Summary csv file #21
Conversation
landrews-amd
left a comment
lgtm, one small suggestion
summary_parser.add_argument(
    "--summary_path",
    dest="summary_path",
    type=log_path_arg,
    help="Path to node-scraper results. Generates summary csv file in summary.csv.",
)
I think it would be good to have separate args for the search path vs. the output path (see the sketch below):
- Search path: the location of the result files to process
- Output path: where the summary is written (defaults to the cwd)
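A minimal sketch of what that split could look like, reusing the existing summary_parser and log_path_arg from the CLI; the new argument names and defaults are assumptions, not the final implementation:

import os

# Hypothetical split of --summary_path into a search path and an output path.
# summary_parser and log_path_arg are assumed to come from the existing CLI setup.
summary_parser.add_argument(
    "--search_path",
    dest="search_path",
    type=log_path_arg,
    help="Path to search for node-scraper result csv files.",
)
summary_parser.add_argument(
    "--output_path",
    dest="output_path",
    default=os.getcwd(),
    help="Directory to write summary.csv to (defaults to the current working directory).",
)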
@alexandraBara what do you think of this suggested approach?
I think this is good, I'll update.
@alexandraBara, I might have seen it already, but please remind me how the
@alexandraBara, does status=OK mean that the output of a plugin is equal to the corresponding config file entry?
It would be the results of one run of node-scraper; in the case of the summary file above, one of those files is this: notice it's the same node and the same timestamp. Also, I am going to change the name from errorscraper.csv to nodescraper.csv.
Yes, if you ran this command: and the result says OK, then the collected data matches the data expected from myconfig.json. The result saying OK means the collected data passed the analysis phase. So really the meaning of "OK" depends on how you ran the tool.
nodescraper/cli/helper.py
Outdated
fieldnames = ["nodename", "plugin", "status", "timestamp", "message"]
all_rows = []

pattern = os.path.join(base_path, "**", "errorscraper.csv")
Should be nodescraper.csv
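Applied to the quoted line above, the pattern would presumably become:

pattern = os.path.join(base_path, "**", "nodescraper.csv")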
landrews-amd
left a comment
One last small suggestion, otherwise LGTM
logger.error("No data rows found in matched CSV files.")
return

if not output_path:
If output_path can be None, its type hint should be updated accordingly to Optional[...].
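A sketch of what that could look like, assuming the summary helper takes a Path for the output location (the actual function name and types in nodescraper/cli/helper.py may differ):

from pathlib import Path
from typing import Optional

# Illustrative signature only; the actual helper in nodescraper/cli/helper.py may differ.
def generate_summary(base_path: Path, output_path: Optional[Path] = None) -> None:
    if not output_path:
        # Fall back to the current working directory, per the earlier suggestion.
        output_path = Path.cwd()
    ...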
Generate a summary csv file by combining result csv files from previous node-scraper runs.
Sample run:
This will generate a new file, home/alexbara/node-scraper/summary.csv.
Sample summary.csv file:
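Based on the fieldnames in nodescraper/cli/helper.py, summary.csv would have the columns nodename, plugin, status, timestamp, and message. As a rough sketch of the combining step (the function and variable names here are assumptions, not the PR's actual implementation):

import csv
import glob
import os

# Rough sketch of combining per-run result files into one summary.csv.
def write_summary(base_path: str, output_path: str) -> None:
    fieldnames = ["nodename", "plugin", "status", "timestamp", "message"]
    all_rows = []

    # Recursively collect per-run result files (renamed to nodescraper.csv per the review).
    pattern = os.path.join(base_path, "**", "nodescraper.csv")
    for csv_file in glob.glob(pattern, recursive=True):
        with open(csv_file, newline="") as f:
            all_rows.extend(csv.DictReader(f))

    # Write the combined rows into a single summary.csv in output_path.
    with open(os.path.join(output_path, "summary.csv"), "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(all_rows)

Using csv.DictWriter with extrasaction="ignore" keeps the output limited to the expected columns even if individual result files carry extra fields.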