* make README more readable and add further information
* produce a global summary format also when run on only 2 single files
* update inspection interface
* minor bug fixes
At the moment, 3 different comparisons are implemented:
1. relative difference of bin contents,
1. Chi2 test,
1. simple comparison of the number of entries.
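As an illustration of the first test, here is a minimal sketch in plain Python. Note this is not the macro's actual code: the real implementation operates on ROOT histograms, and the function name and the 10% default threshold here are assumptions.

```python
def rel_bin_diff(bins_a, bins_b, threshold=0.1):
    """Relative difference of bin contents between two histograms.

    bins_a, bins_b: bin contents of two histograms with identical binning.
    threshold: maximum tolerated relative difference (assumed 10% here).
    Returns (per-bin differences, passed) or (None, False) if the
    histograms are not comparable (different number of bins).
    """
    if len(bins_a) != len(bins_b):
        # mirrors the "could not be compared" case, e.g. different binning
        return None, False
    diffs = []
    for a, b in zip(bins_a, bins_b):
        ref = max(abs(a), abs(b))
        diffs.append(abs(a - b) / ref if ref else 0.0)
    return diffs, all(d <= threshold for d in diffs)
```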
The first 2 tests are considered critical; hence, if the threshold is exceeded, the comparison result is flagged as `BAD`.
Each test is assigned one of 5 severities:
1. `GOOD` if the threshold was not exceeded,
1. `WARNING` if a non-critical test exceeds the threshold (in this case only when comparing the number of entries),
1. `NONCRIT_NC` if the histograms could not be compared (e.g. due to different binning or axis ranges) **and** the test is considered **non-critical**,
1. `CRIT_NC` if the histograms could not be compared (e.g. due to different binning or axis ranges) **and** the test is considered **critical**,
1. `BAD` if a critical test exceeds the threshold.
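The mapping from a test outcome to one of these severities can be sketched as follows. This is a hypothetical helper for illustration only, not the script's actual code:

```python
def severity(critical, comparable, exceeds_threshold):
    """Derive one of the 5 severities described above.

    critical: is the test critical (Chi2, bin content)?
    comparable: could the histograms be compared at all?
    exceeds_threshold: did the test exceed its threshold?
    """
    if not comparable:
        # non-comparable histograms, e.g. different binning or axis ranges
        return "CRIT_NC" if critical else "NONCRIT_NC"
    if exceeds_threshold:
        return "BAD" if critical else "WARNING"
    return "GOOD"
```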
## Python wrapper and usage
Although the above macro can be used on its own, it has also been wrapped in a [Python script](o2dpg_release_validation.py) for convenience, which offers significantly more functionality.
The full help message of this script can be printed by passing `--help`.
This performs all of the above-mentioned tests. If only certain tests should be run, use the flags `--with-<which-test>`, where `<which-test>` is one of
1. `chi2`,
1. `bincont`,
1. `numentries`.
By default, all of them are switched on.
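Dynamically generated `--with-<which-test>` switches could look roughly like this `argparse` sketch. The actual script's parser setup may differ; only the test names above are taken from the text:

```python
import argparse

# Assumption: flag names mirror the tests listed above. Adding them in a
# loop means future tests are picked up without touching the parser code.
TESTS = ("chi2", "bincont", "numentries")

parser = argparse.ArgumentParser()
for test in TESTS:
    # default None lets us detect "no --with-* given" and enable all tests
    parser.add_argument(f"--with-{test}", dest=f"with_{test}",
                        action="store_true", default=None)

args = parser.parse_args(["--with-chi2"])
enabled = [t for t in TESTS if getattr(args, f"with_{t}")]
if not enabled:
    # no --with-* flag was passed: all tests are switched on by default
    enabled = list(TESTS)
```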
### Apply to entire simulation outcome
In addition to simply comparing 2 ROOT files, the script offers the possibility of comparing 2 corresponding directories that contain simulation artifacts (and potentially QC and analysis results). This then automatically runs the RelVal on
1. QC output,
1. analysis results output,
1. TPC tracks output,
1. MC kinematics,
1. MC hits.
**NOTE** that each of these comparison types is only run if mutual files were found in the 2 corresponding directories.
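The mutual-file discovery could be pictured like this. `find_mutual_files` is a hypothetical helper for illustration, not part of the script:

```python
import os

def find_mutual_files(dir_a, dir_b, suffix=".root"):
    """Return relative paths (with the given suffix) present in both
    directory trees -- only such pairs would be compared."""
    def collect(top):
        found = set()
        for root, _, files in os.walk(top):
            for f in files:
                if f.endswith(suffix):
                    found.add(os.path.relpath(os.path.join(root, f), top))
        return found
    return sorted(collect(dir_a) & collect(dir_b))
```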
The latter optional argument can be a list of any of the above-mentioned severities. If a directory is passed as input, it is expected to contain either a file named `SummaryGlobal.json` or, if that cannot be found, a file named `Summary.json`.
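The described lookup order can be sketched as follows. This is illustrative only; `resolve_summary` is not the script's actual function:

```python
import os

def resolve_summary(path):
    """Resolve the inspect input: a direct path to a JSON file, or a
    directory where SummaryGlobal.json is preferred over Summary.json."""
    if os.path.isfile(path):
        return path
    for name in ("SummaryGlobal.json", "Summary.json"):
        candidate = os.path.join(path, name)
        if os.path.isfile(candidate):
            return candidate
    raise FileNotFoundError(f"no summary JSON found under {path}")
```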
### Make ready for InfluxDB
To convert the final output to something that can be digested by InfluxDB, use
When the `--tags` argument is specified, the given key-value pairs are injected as additional InfluxDB tags. The table name can also be specified explicitly; if not given, it defaults to `O2DPG_MC_ReleaseValidation`.
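A rough sketch of what such a conversion to InfluxDB line protocol could look like. Field and tag names here are illustrative; only the default table name comes from the text above:

```python
def to_influx_line(fields, tags=None, table="O2DPG_MC_ReleaseValidation"):
    """Build one InfluxDB line-protocol record:
    <table>[,tag=value...] <field>=<value>[,...]"""
    tag_str = "".join(f",{k}={v}" for k, v in (tags or {}).items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{table}{tag_str} {field_str}"
```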
```diff
     # add all tests - do it dynamically because more might be added in the future
     if "test_" not in k:
@@ -578,14 +605,14 @@ def main():
     rel_val_parser.set_defaults(func=rel_val)

     inspect_parser = sub_parsers.add_parser("inspect")
-    inspect_parser.add_argument("file", help="pass a JSON produced from ReleaseValidation (rel-val)")
+    inspect_parser.add_argument("path", help="either complete file path to a Summary.json or SummaryGlobal.json or directory where one of the former is expected to be")
```