Updated Solution Analyzer to identify additional connectors #13328
base: master
Conversation
import argparse
import csv
import re
import os

Check notice: Code scanning / CodeQL - Unused import (Note)
Copilot Autofix (AI, about 3 hours ago)
To fix an unused import, remove the corresponding import statement when no names from that module are referenced anywhere in the file. This avoids unnecessary dependencies and slightly improves readability and startup time.
In this file, the single best fix is to delete the import os line at line 30 of Tools/Solutions Analyzer/collect_table_info.py, leaving all other imports unchanged. No new methods, imports, or definitions are required, and existing functionality will be preserved because nothing in the file uses the os module.
@@ -27,7 +27,6 @@
 import argparse
 import csv
 import re
-import os
 import sys
 import hashlib
 import time
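As background for the unused-import findings above and below, the same class of issue can be spotted locally with a small heuristic script. The sketch below is hypothetical and not part of this PR; it uses only the standard-library ast module, and the file name find_unused_imports.py and the helper name are illustrative.

# find_unused_imports.py - heuristic detection of unused top-level imports (illustrative, not from this PR)
import ast
import sys


def find_unused_imports(source: str) -> list:
    """Return descriptions of imported names that are never referenced as a Name node."""
    tree = ast.parse(source)
    imported = {}  # bound name -> line number of the import
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import a.b" binds the top-level name "a"
                imported[(alias.asname or alias.name).split(".")[0]] = node.lineno
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = node.lineno
    used = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    return [f"line {lineno}: {name}"
            for name, lineno in sorted(imported.items(), key=lambda item: item[1])
            if name not in used]


if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as handle:
        print("\n".join(find_unused_imports(handle.read())))

This is deliberately conservative: names used only inside string annotations or __all__ would show up as false positives, which is why a real analyzer such as CodeQL or pyflakes remains preferable.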
import time
import json
from pathlib import Path
from io import StringIO

Check notice: Code scanning / CodeQL - Unused import (Note)
Copilot Autofix (AI, about 3 hours ago)
To fix an unused import, the general approach is simply to remove the import statement (or the unused name within a multi-name import) so that only actually used symbols are imported. This eliminates unnecessary dependencies and makes the code clearer.
In this case, the best fix is to delete the line from io import StringIO in Tools/Solutions Analyzer/collect_table_info.py, since CodeQL reports that StringIO is not referenced elsewhere. No other changes are necessary because removing an unused import does not affect runtime behavior. Specifically, in collect_table_info.py, on line 36, remove the entire import line. No additional methods, imports, or definitions are required.
@@ -33,7 +33,6 @@
 import time
 import json
 from pathlib import Path
-from io import StringIO
 from typing import Dict, List, Optional, Tuple, Set
 from dataclasses import dataclass, field
 from html.parser import HTMLParser
import json
from pathlib import Path
from io import StringIO
from typing import Dict, List, Optional, Tuple, Set

Check notice: Code scanning / CodeQL - Unused import (Note)
Copilot Autofix (AI, about 3 hours ago)
To fix an unused import, remove just the unused name from the import statement while leaving the used ones intact. This reduces unnecessary dependencies and makes the code clearer, without altering functionality.
In this file, the import line from typing import Dict, List, Optional, Tuple, Set appears around line 37 in Tools/Solutions Analyzer/collect_table_info.py. Since only Tuple is reported as unused, the best minimal change is to remove Tuple from that list, resulting in from typing import Dict, List, Optional, Set. No other code changes or new imports are required, and there is no impact on runtime behavior, as this is purely a typing-related import.
@@ -34,7 +34,7 @@
 import json
 from pathlib import Path
 from io import StringIO
-from typing import Dict, List, Optional, Tuple, Set
+from typing import Dict, List, Optional, Set
 from dataclasses import dataclass, field
 from html.parser import HTMLParser
from pathlib import Path
from io import StringIO
from typing import Dict, List, Optional, Tuple, Set
from dataclasses import dataclass, field

Check notice: Code scanning / CodeQL - Unused import (Note)
Copilot Autofix (AI, about 3 hours ago)
To fix an unused import, remove the unused symbol from the import statement while keeping any symbols that are actually used. In this case, we keep dataclass and remove field from the from dataclasses import ... line.
Concretely, in Tools/Solutions Analyzer/collect_table_info.py at line 38, change from dataclasses import dataclass, field to import only dataclass. No other changes are required, and no new methods or imports are needed.
@@ -35,7 +35,7 @@
 from pathlib import Path
 from io import StringIO
 from typing import Dict, List, Optional, Tuple, Set
-from dataclasses import dataclass, field
+from dataclasses import dataclass
 from html.parser import HTMLParser
from io import StringIO
from typing import Dict, List, Optional, Tuple, Set
from dataclasses import dataclass, field
from html.parser import HTMLParser

Check notice: Code scanning / CodeQL - Unused import (Note)
Copilot Autofix (AI, about 3 hours ago)
To fix an unused import, the standard approach is to remove the import statement entirely, provided the imported symbol is not used anywhere in the file. In this case, the specific issue is from html.parser import HTMLParser on line 39 in Tools/Solutions Analyzer/collect_table_info.py.
The best fix without changing existing functionality is simply to delete that line and leave all other imports and code untouched. Since html.parser is part of the standard library and there is no indication that HTMLParser is accessed indirectly (e.g., via the module’s globals), this removal will not affect runtime behavior.
Concretely, in Tools/Solutions Analyzer/collect_table_info.py, remove line 39 containing from html.parser import HTMLParser. No other code changes, imports, methods, or definitions are required.
@@ -36,7 +36,6 @@
 from io import StringIO
 from typing import Dict, List, Optional, Tuple, Set
 from dataclasses import dataclass, field
-from html.parser import HTMLParser


 # Try to import requests for web URL support
 try:
    from bs4 import BeautifulSoup
    HAS_BS4 = True
except ImportError:
    HAS_BS4 = False

Check notice: Code scanning / CodeQL - Unused global variable (Note)
Copilot Autofix (AI, about 3 hours ago)
In general, unused global variables should either be removed or renamed to a conventional "unused" name if they are kept for documentation purposes. Here, the boolean HAS_BS4 is clearly intended as a capability flag, but since it is not actually used, the cleanest fix is to remove it while preserving the conditional import behavior. That means deleting the assignments to HAS_BS4 and the variable itself, but keeping the try/except block that attempts to import BeautifulSoup and silently falls back if it is not present.
Concretely, in Tools/Solutions Analyzer/collect_table_info.py around lines 48–53, we will simplify the BeautifulSoup import section to just a try/except around the import, and drop HAS_BS4 entirely. No other parts of the file need to be changed, and no new imports or helper methods are required.
@@ -48,9 +48,9 @@
 # Try to import BeautifulSoup for better HTML parsing
 try:
     from bs4 import BeautifulSoup
-    HAS_BS4 = True
 except ImportError:
-    HAS_BS4 = False
+    # BeautifulSoup is optional; fall back to built-in HTMLParser if unavailable
+    pass


 # Default documentation URLs (using learn.microsoft.com directly)
 AZURE_MONITOR_TABLES_CATEGORY = 'https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables-category'
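For contrast with the autofix above, the capability-flag pattern the original HAS_BS4 assignment seems to have been aiming for is sketched below. This is a hypothetical illustration only: the parse_html helper does not exist in the repository, and the fix keeps things simpler by dropping the flag because nothing ever reads it.

# Hypothetical sketch of an optional dependency whose flag is actually consulted.
try:
    from bs4 import BeautifulSoup
    HAS_BS4 = True
except ImportError:
    HAS_BS4 = False


def parse_html(html: str):
    """Use BeautifulSoup when installed, otherwise fall back to the standard library."""
    if HAS_BS4:
        return BeautifulSoup(html, "html.parser")
    from html.parser import HTMLParser  # stdlib fallback when bs4 is unavailable
    parser = HTMLParser()
    parser.feed(html)
    return parser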
            'timestamp': time.time(),
            'size': len(content)
        }, f)
    except IOError:

Check notice: Code scanning / CodeQL - Empty except (Note)
Copilot Autofix (AI, about 3 hours ago)
In general, empty except blocks should either (a) handle the exception (log, clean up, update state) or (b) explicitly document why ignoring the exception is safe. For non-critical operations like best-effort caching, the ideal fix is to log the error at a low verbosity level (e.g., to stderr) and continue, preserving current behavior while making issues observable.
For this specific case, we should update write_to_cache so that the except IOError: block is no longer empty. The best minimal change is to write a short message to sys.stderr indicating that writing to the cache failed, optionally including the cache path and the exception message, then return. This keeps cache failures non-fatal but visible when someone runs the script in a terminal. We can reuse the existing sys import at the top of the file and avoid new dependencies.
Concretely, in Tools/Solutions Analyzer/collect_table_info.py, around lines 172–185, we will replace:

    except IOError:
        pass

with something like:

    except IOError as e:
        # Cache writes are best-effort; ignore failures but log for diagnostics.
        print(f"Warning: failed to write cache file '{cache_path}': {e}", file=sys.stderr)

No new imports or helper methods are required; sys is already imported at line 31. This change does not alter the control flow (the function effectively still “ignores” the failure after logging) but satisfies the static analysis tool and improves maintainability.
@@ -182,8 +182,9 @@
             'timestamp': time.time(),
             'size': len(content)
         }, f)
-    except IOError:
-        pass
+    except IOError as e:
+        # Cache writes are best-effort; ignore failures but log for diagnostics.
+        print(f"Warning: failed to write cache file '{cache_path}': {e}", file=sys.stderr)


 def clear_cache() -> int:
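As a follow-up design note (not part of the autofix above), the same best-effort behavior can also be expressed with the standard logging module instead of ad hoc prints to stderr, which gives callers a single switch for verbosity. The sketch below is illustrative, and the function name and parameters are assumptions rather than the repository's actual API.

# Illustrative alternative: best-effort cache write logged via the logging module.
import json
import logging
import time

logger = logging.getLogger(__name__)


def write_cache_metadata(meta_path, content: str) -> None:
    """Write cache metadata; failures are logged as warnings and never raised."""
    try:
        with open(meta_path, "w", encoding="utf-8") as handle:
            json.dump({"timestamp": time.time(), "size": len(content)}, handle)
    except OSError as exc:
        # Cache writes stay best-effort: surface the problem without failing the run.
        logger.warning("Failed to write cache metadata %s: %s", meta_path, exc)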
        try:
            cache_file.unlink()
            count += 1
        except IOError:

Check notice: Code scanning / CodeQL - Empty except (Note)
Copilot Autofix (AI, about 3 hours ago)
In general, empty except blocks should either (1) be removed so exceptions propagate, or (2) contain explicit, documented handling such as logging, fallback behavior, or a clear justification comment. For non-critical cleanup, the common pattern is to log the error at a low severity and continue.
Here, the best fix without changing existing functionality is to keep the “best effort” behavior (do not raise), but log failures to delete cache or metadata files to sys.stderr and add a brief comment explaining that cache deletion errors are non-fatal. This maintains the current behavior (the function still returns the count of successfully deleted files and does not crash), while avoiding a silent swallow of IO problems.
Concretely:
- In clear_cache, replace the two except IOError: pass blocks (around cache_file.unlink() and meta_file.unlink()) with except OSError as e: blocks (a bit broader than IOError, but still appropriate for file operations) that:
  - Write a short message to sys.stderr using print(..., file=sys.stderr) indicating which file could not be deleted and why.
  - Optionally include a comment explaining that errors are intentionally ignored except for logging.
- No new imports are needed; sys is already imported at the top of the file.
@@ -196,14 +196,16 @@
         try:
             cache_file.unlink()
             count += 1
-        except IOError:
-            pass
+        except OSError as e:
+            # Best-effort cleanup: log and continue; cache delete failures are non-fatal
+            print(f"Warning: failed to delete cache file '{cache_file}': {e}", file=sys.stderr)

     for meta_file in _cache_dir.glob('*.meta'):
         try:
             meta_file.unlink()
-        except IOError:
-            pass
+        except OSError as e:
+            # Best-effort cleanup: log and continue; metadata delete failures are non-fatal
+            print(f"Warning: failed to delete cache metadata file '{meta_file}': {e}", file=sys.stderr)

     return count
    for meta_file in _cache_dir.glob('*.meta'):
        try:
            meta_file.unlink()
        except IOError:

Check notice: Code scanning / CodeQL - Empty except (Note)
Copilot Autofix (AI, about 3 hours ago)
In general, empty except blocks should either (1) handle the exception meaningfully (logging, cleanup, fallback) or (2) explicitly document why it is safe to ignore, typically with a comment and possibly narrowing the exception type. Here, the optimal fix is to add lightweight logging to the IOError handlers in write_to_cache and clear_cache, without changing control flow: failures remain non-fatal, but are no longer completely silent.
Concretely:
- In write_to_cache, replace the bare except IOError: pass with except IOError as e: that emits a brief message to sys.stderr mentioning that the cache write failed, and then returns as before.
- In clear_cache, similarly replace each except IOError: pass block with except IOError as e: and log that a given file could not be deleted, then continue the loop. This preserves behavior (files that fail to delete are just skipped), but surfaces the issue.
- No new imports are needed because sys is already imported at the top of the file.
These changes should be made in Tools/Solutions Analyzer/collect_table_info.py around lines 185–186 and 199–206 as shown in the snippet.
@@ -182,8 +182,9 @@
             'timestamp': time.time(),
             'size': len(content)
         }, f)
-    except IOError:
-        pass
+    except IOError as e:
+        # Cache write failures are non-fatal; log and continue without caching.
+        print(f"Warning: failed to write cache for URL {url!r} at {cache_path}: {e}", file=sys.stderr)


 def clear_cache() -> int:
@@ -196,14 +197,16 @@
         try:
             cache_file.unlink()
             count += 1
-        except IOError:
-            pass
+        except IOError as e:
+            # Cache delete failures are non-fatal; log and continue.
+            print(f"Warning: failed to delete cache file {cache_file}: {e}", file=sys.stderr)

     for meta_file in _cache_dir.glob('*.meta'):
         try:
             meta_file.unlink()
-        except IOError:
-            pass
+        except IOError as e:
+            # Metadata delete failures are non-fatal; log and continue.
+            print(f"Warning: failed to delete cache metadata file {meta_file}: {e}", file=sys.stderr)

     return count
…zure/Azure-Sentinel into tools/map-connectors-to-tables
            continue
        plural_sources = table_info.get("plural_sources") or []
        mismatch = table_info.get("has_mismatch", False)
        actual_name = table_info.get("actual_table")

Check warning: Code scanning / CodeQL - Variable defined multiple times (Warning: redefined)
Copilot Autofix (AI, about 3 hours ago)
In general, to fix "variable defined multiple times" where the first assignment is overwritten before any use, we either remove the redundant assignment or, if the right-hand side has important side effects, preserve those side effects without storing the result in an unused variable.
Here, the assignment actual_name = table_info.get("actual_table") at line 1304 is redundant because actual_name is not used anywhere in this loop body. The right-hand side is a simple dictionary lookup (table_info.get(...)) with no side effects, so it is safe to delete this line entirely. We keep the earlier logic around line 1215 intact, because that use of actual_name directly participates in a condition and assignment. Concretely, in Tools/Solutions Analyzer/map_solutions_connectors_tables.py, within the loop starting around line 1295, remove the line that assigns actual_name = table_info.get("actual_table"). No new imports, methods, or definitions are needed.
@@ -1301,7 +1301,6 @@
             continue
         plural_sources = table_info.get("plural_sources") or []
         mismatch = table_info.get("has_mismatch", False)
-        actual_name = table_info.get("actual_table")
         if plural_sources:
             plural_list = ", ".join(sorted(plural_sources))
             add_issue(
    total = len(lines)
    while idx < total:
        line = lines[idx]
        stripped = line.strip()

Check notice: Code scanning / CodeQL - Unused local variable (Note)
Copilot Autofix (AI, about 3 hours ago)
In general, to fix an unused local variable you either (a) remove the variable and its assignment if it has no side effects, or (b) if the right-hand side has side effects that must be preserved, keep the expression but drop the unused variable name, or (c) rename the variable to an “intentionally unused” name (like _) when the value is deliberately ignored. Here, line.strip() has no necessary side effects, and the stripped variable is not read, so the best fix is simply to remove the assignment line.
Specifically, in Tools/Solutions Analyzer/map_solutions_connectors_tables.py, in the _extract_function_queries_from_lines function, remove the line stripped = line.strip() at line 695. No additional imports or new definitions are required, and no other code within that function depends on stripped, so this change does not alter existing functionality.
@@ -692,7 +692,6 @@
     total = len(lines)
     while idx < total:
         line = lines[idx]
-        stripped = line.strip()
         if ":" not in line:
             idx += 1
             continue
    cleaned_content = '\n'.join(cleaned_lines)

    # 2. Attempt to fix trailing commas before } or ]
    import re

Check notice: Code scanning / CodeQL - Module is imported more than once (Note, on line 6)
Copilot Autofix (AI, about 3 hours ago)
In general, to fix a “module is imported more than once” issue in Python, remove the redundant import and rely on the original import, unless there is a deliberate need for a conditional or lazy import. Here, re is imported once at the top of map_solutions_connectors_tables.py (line 6) and then imported again inside read_json (line 150). The best fix is to delete the inner import re and leave the outer import intact, since re is already available in the function’s scope through the module-level import. This maintains all existing functionality because re.sub will continue to work exactly as before, using the top-level import.
Concretely, in Tools/Solutions Analyzer/map_solutions_connectors_tables.py, within the read_json function, remove line 150 containing import re and keep the subsequent call to re.sub unchanged. No new methods, imports, or definitions are required, and no other lines need to be modified.
@@ -147,7 +147,6 @@
     cleaned_content = '\n'.join(cleaned_lines)

     # 2. Attempt to fix trailing commas before } or ]
-    import re
     cleaned_content = re.sub(r',(\s*[}\]])', r'\1', cleaned_content)

     # Try parsing cleaned content
    return path.as_posix()


def read_json(path: Path) -> Optional[Any]:

Check notice: Code scanning / CodeQL - Explicit returns mixed with implicit (fall through) returns (Note)
Copilot Autofix (AI, about 3 hours ago)
In general, to fix “explicit returns mixed with implicit returns” you ensure that all control-flow paths through the function end in an explicit return, even if that return is return None. That way, other readers (and static analyzers) can see that you have intentionally handled all cases.
For read_json, we should keep its behavior the same: on success it should return the parsed JSON value; on any failure it should return None. The shown code already explicitly returns None in the except blocks. To avoid any remaining implicit fall-through, we add a final return None at the end of the function body, after the try/except block. This covers any unusual path where execution might exit the try without hitting an earlier return (for example, if new code is later added inside the try that doesn’t return). No imports or helper methods are required; we only add a single line at the end of the function.
Concretely: in Tools/Solutions Analyzer/map_solutions_connectors_tables.py, inside read_json, after the last except block (except Exception as exc: ... return None), add return None as the final statement in the function.
@@ -175,6 +175,7 @@
     except Exception as exc:
         print(f"Failed to read {path}: {exc}")
         return None
+    return None


 def remove_line_comments(text: str) -> str:
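Putting the suggestion above in context, a read_json-style helper in which every path ends in an explicit return looks roughly like the sketch below. This is a minimal illustration assuming the contract described earlier (parsed value on success, None on any failure); it is not a copy of the repository's actual implementation, which also cleans up comments and trailing commas before parsing.

# Minimal sketch: every exit from the function is an explicit return.
import json
from pathlib import Path
from typing import Any, Optional


def read_json(path: Path) -> Optional[Any]:
    """Return the parsed JSON value, or None if the file cannot be read or parsed."""
    try:
        with path.open(encoding="utf-8") as handle:
            return json.load(handle)
    except json.JSONDecodeError as exc:
        print(f"Failed to parse {path}: {exc}")
        return None
    except OSError as exc:
        print(f"Failed to read {path}: {exc}")
        return None
    # Unreachable today, but keeps the final path explicit if the body grows later.
    return None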
import re
import argparse
from collections import defaultdict
from datetime import datetime

Check notice: Code scanning / CodeQL - Unused import (Note)
Copilot Autofix (AI, about 3 hours ago)
In general, unused imports should be removed to keep dependencies minimal and the code clear. If an imported name is genuinely not used anywhere in the file, the correct fix is to delete that import line.
For this specific case, the best fix is to remove the line from datetime import datetime from Tools/Solutions Analyzer/map_solutions_connectors_tables.py. This does not change any functionality because datetime is not referenced elsewhere according to the static analysis result. No other lines need to be updated, and no additional methods or definitions are required.
Concretely, edit Tools/Solutions Analyzer/map_solutions_connectors_tables.py and delete line 9 that imports datetime, leaving all other imports and code intact.
@@ -6,7 +6,6 @@
 import re
 import argparse
 from collections import defaultdict
-from datetime import datetime
 from pathlib import Path
 from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
 from urllib.parse import quote
import csv
import json
import os

Check notice: Code scanning / CodeQL - Unused import (Note)
Copilot Autofix (AI, about 3 hours ago)
To fix an unused import, you remove the corresponding import statement from the file so that the module is no longer listed as a dependency. This reduces clutter and avoids misleading readers about what modules are actually required.
In this specific case, the best fix is to delete the import os line near the top of Tools/Solutions Analyzer/map_solutions_connectors_tables.py and leave all other imports unchanged. This does not alter runtime behavior if os is truly unused, and it directly addresses the CodeQL warning. Concretely, edit the import block around lines 3–7 so that csv, json, re, argparse, etc., remain, but the os line is removed. No new methods, helpers, or additional imports are required.
@@ -2,7 +2,6 @@

 import csv
 import json
-import os
 import re
 import argparse
 from collections import defaultdict
    print(f" Uploading data (streaming)...")
    try:
        # Use ingest_from_file for streaming ingestion
        result = ingest_client.ingest_from_file(str(csv_path), ingestion_properties=ingestion_props)

Check notice: Code scanning / CodeQL - Unused local variable (Note)
Copilot Autofix (AI, about 3 hours ago)
In general, to fix an unused local variable when the right-hand side has important side effects, you should keep the expression (so the side effects still happen) but remove the assignment target, or rename the variable to an accepted “intentionally unused” name (such as _) if you truly need the binding. Here, the ingestion side effect is the call to ingest_client.ingest_from_file(...); assigning its return value to result is unnecessary because result is never used.
The best fix with no functional change is to remove the result = part and just call ingest_client.ingest_from_file(...) as a standalone statement. This preserves the ingestion behavior and exception semantics while eliminating the unused local variable. Concretely, in Tools/Solutions Analyzer/solution_analyzer_upload_to_kusto.py, within the function that uploads the CSV using streaming ingestion, replace the line that assigns to result with the same call without assignment. No imports, methods, or additional definitions are needed.
@@ -134,7 +134,7 @@
     print(f" Uploading data (streaming)...")
     try:
         # Use ingest_from_file for streaming ingestion
-        result = ingest_client.ingest_from_file(str(csv_path), ingestion_properties=ingestion_props)
+        ingest_client.ingest_from_file(str(csv_path), ingestion_properties=ingestion_props)
         print(f" Successfully ingested {csv_path.name}")
         return True
     except Exception as e:
                    if not next_part.endswith('.json'):
                        connector_folder_name = next_part
                        connector_json_folder = "/".join(parts[dc_idx:dc_idx+2])
            except (IndexError, ValueError):

Check notice: Code scanning / CodeQL - Empty except (Note)
Copilot Autofix (AI, about 3 hours ago)
In general, empty except blocks should either (1) log/report the exception, (2) transform and re-raise it, or (3) have a clear comment justifying that ignoring it is safe. For non-critical tasks like heuristic parsing, logging a warning and continuing is usually the least disruptive fix.
Here, we can keep the control flow unchanged (still fall back to other strategies when parsing fails) but replace the bare pass with a diagnostic action. The simplest non-invasive approach is to print an informative message to sys.stderr including the offending URL and the exception; this avoids adding new dependencies and keeps behavior mostly the same while making failures visible. For the README reading block, we should likewise log a warning if reading fails instead of silently ignoring errors.
Concrete changes in Tools/Solutions Analyzer/generate_connector_docs.py:

- In the try block that parses file_url and the subsequent except (IndexError, ValueError): pass, replace pass with a small block that writes a warning to sys.stderr, including file_url and the exception, e.g.:

      except (IndexError, ValueError) as exc:
          print(f"Warning: Failed to parse connector file URL '{file_url}': {exc}", file=sys.stderr)

  This preserves the current fallback behavior but surfaces problems.

- In the try/except Exception: around reading readme_file, replace pass similarly with a warning that includes readme_file and the exception:

      except Exception as exc:
          print(f"Warning: Failed to read connector documentation file '{readme_file}': {exc}", file=sys.stderr)

No new imports are required because sys is already imported at the top of the file. Functionality (in terms of which docs are ultimately found or not) remains unchanged; we only improve observability of failure cases.
@@ -139,8 +139,11 @@
                     if not next_part.endswith('.json'):
                         connector_folder_name = next_part
                         connector_json_folder = "/".join(parts[dc_idx:dc_idx+2])
-            except (IndexError, ValueError):
-                pass
+            except (IndexError, ValueError) as exc:
+                print(
+                    f"Warning: Failed to parse connector file URL '{file_url}': {exc}",
+                    file=sys.stderr,
+                )

     # Strategy 1: Look for any .md file in connector's dedicated subfolder
     for dc_folder in data_connector_folders:
@@ -162,8 +165,11 @@
                 content = readme_file.read_text(encoding='utf-8')
                 rel_path = str(readme_file.relative_to(solutions_dir))
                 return content, rel_path
-            except Exception:
-                pass
+            except Exception as exc:
+                print(
+                    f"Warning: Failed to read connector documentation file '{readme_file}': {exc}",
+                    file=sys.stderr,
+                )

     # Strategy 2: Look for README with connector name in filename
     for dc_folder in data_connector_folders:
                    # Check if this is a subfolder (not a JSON file)
                    if not next_part.endswith('.json'):
                        connector_folder_name = next_part
                        connector_json_folder = "/".join(parts[dc_idx:dc_idx+2])

Check notice: Code scanning / CodeQL - Unused local variable (Note)
Copilot Autofix (AI, about 3 hours ago)
In general, to fix an unused local variable, either (a) delete the variable (and its assignment) if it has no side effects and is not needed, or (b) if the assignment expression has side effects you must preserve, remove only the variable binding (for example, by calling the expression directly), or (c) if it is intentionally unused (for clarity or documentation), rename it to an accepted “unused” pattern (_, unused_..., etc.).
Here, the right-hand side of the assignment to connector_json_folder is a pure string construction ("/".join(parts[dc_idx:dc_idx+2])) with no side effects, and we don’t see any use of the value. The best fix that doesn’t change functionality is to remove connector_json_folder entirely: delete its initialization at line 121 and drop the assignment at line 141, leaving only connector_folder_name = next_part. No imports or additional definitions are required.
Concretely, in Tools/Solutions Analyzer/generate_connector_docs.py:
- In the variable initialization section before parsing connector_files, remove the connector_json_folder = None line.
- In the inner loop where connector_folder_name and connector_json_folder are assigned, remove the assignment to connector_json_folder, keeping the rest of the logic intact.
@@ -118,7 +118,6 @@

     # Parse connector file paths to find the connector's folder
     connector_folder_name = None
-    connector_json_folder = None
     if connector_files:
         for file_url in connector_files.split(';'):
             file_url = file_url.strip()
@@ -138,7 +137,6 @@
                     # Check if this is a subfolder (not a JSON file)
                     if not next_part.endswith('.json'):
                         connector_folder_name = next_part
-                        connector_json_folder = "/".join(parts[dc_idx:dc_idx+2])
             except (IndexError, ValueError):
                 pass
v4.0
- Recognizes additional data connector folder names:
  - Data Connectors (standard, with space)
  - DataConnectors (no space) - adds solutions such as Alibaba Cloud, CyberArkEPM, IronNet IronDefense, MarkLogicAudit, Open Systems, PDNS Block Data Connector, SlashNext
  - Data Connector (singular) - adds IoTOTThreatMonitoringwithDefenderforIoT
- Connectors with [variables(...)] in their id field now generate the ID from the title (see the sketch below)
- New connector_id_generated column to track when a connector ID was auto-generated from the title
- Connectors with no table definitions (no_table_definitions) are now included in the output with an empty table field (previously excluded)
- New compare_connector_catalogs.py script to compare GitHub connectors with the Sentinel catalog
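To illustrate the ID-generation item in the list above, a sketch of deriving a connector ID from its title is shown below. The normalization rules and the helper name connector_id_from_title are assumptions for illustration; the analyzer's actual logic may differ.

# Hypothetical sketch: fall back to a title-derived ID when the JSON id is an ARM
# expression such as [variables('...')] rather than a literal connector ID.
import re


def connector_id_from_title(title: str) -> str:
    """Join the alphanumeric words of the title into a CamelCase-style identifier."""
    words = re.findall(r"[A-Za-z0-9]+", title)
    return "".join(word[:1].upper() + word[1:] for word in words)


# Example: "Open Systems" -> "OpenSystems"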