Conversation
Codecov Report
```diff
@@           Coverage Diff           @@
##           master      #72   +/-   ##
=======================================
+ Coverage   90.93%   91.06%   +0.13%
=======================================
  Files          14       14
  Lines        1235     1254      +19
=======================================
+ Hits         1123     1142      +19
  Misses        112      112
```
Looks good to me! Some nice-to-have things:
Hey @hugofluhr, this is a very solid start, thank you so much!
A couple of additional features would be great (as @venkateshness was proposing):
- Add `if` statements to check that the inputs are consistent (a sketch of possible checks follows this list)
- Consequently, add breaking tests to the test suite
- Add a citation (the one you believe is the most relevant):
  - in the docstrings (in fact, I should do the same myself!), like here
  - using duecredit, in `references.py` (e.g. this one)
  - using duecredit's decorator (e.g. here)
- Don't forget to add `smoothness` in the CLI (e.g. here) and in the supported metrics list

Optionally, if you have time:

- When you add the `if` statements, you could test whether, if 2 is false, transposing `signal` fixes things (raising a warning, ofc)
- In an SDI fashion (if it makes sense?), you could replicate `functional_connectivity`'s structure, transforming what you have now into a hidden sub-function, and then add features to compute smoothness on the whole timeseries and on the split timeseries!
- Also, if `signal` has a third dimension (e.g. multisubject), check that the operations don't break!
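A minimal sketch of what those checks could look like (assumptions on my part: the `smoothness(laplacian, signal)` signature quoted later in this thread, a `(nodes, time)` orientation for `signal`, and a module-level `LGR` logger; the `ValueError` message and the final `matmul` line are taken from the PR's own diff, the rest is not the PR's actual code):

```python
import logging

import numpy as np

LGR = logging.getLogger(__name__)  # assumed logger name


def smoothness(laplacian, signal):
    """Laplacian quadratic form with input checks (sketch only)."""
    signal = np.asarray(signal)
    if signal.ndim == 1:
        # Promote 1D signals to a column vector so the matmuls below work.
        signal = signal[..., np.newaxis]
    if signal.shape[0] != laplacian.shape[0]:
        if signal.shape[1] == laplacian.shape[0]:
            # The shapes don't match, but a transpose fixes it: warn and flip.
            LGR.warning("Transposing signal to match the Laplacian's dimensions.")
            signal = signal.T
        else:
            raise ValueError("The dimensions of the signal and Laplacian don't match.")
    return np.matmul(signal.T, np.matmul(laplacian, signal))
```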
Thanks for the inputs! Progress tracker:
@smoia the coverage is decreasing partly due to
Unless I am missing something (had a quick look on my mobile), LGF.warning is triggered by if statements in 2 cases. Could you push a commit to trigger these if statements in the break tests?
If the coverage decreases a little, that's ok. Break tests are only for raising errors!
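A break test along these lines (a sketch, assuming `pytest` and a hypothetical import path for `metrics`) would only assert that the error is raised:

```python
import numpy as np
import pytest

from nigsp.operations import metrics  # hypothetical import path


def test_smoothness_breaks_on_mismatched_dims():
    # A signal whose length matches neither Laplacian axis should raise.
    laplacian = np.random.rand(10, 10)
    signal = np.random.rand(7)
    with pytest.raises(ValueError):
        metrics.smoothness(laplacian, signal)
```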
```python
computed_smoothness = metrics.smoothness(laplacian, signal)

assert (expected_smoothness == computed_smoothness).all()
```
You can add an assert here with `signal = rand(10)` and `expected_smoothness` computed on `signal[..., np.newaxis]`. And another one with `signal = rand(2, 10)` and `expected_smoothness` computed on `signal.T`.
Or use chatGPT 🤣
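Sketched out, those two asserts might read as follows (assuming `rand`, `laplacian`, and the `metrics` import from the surrounding test file, and using `np.allclose` rather than exact equality to stay safe with floats):

```python
import numpy as np
from numpy.random import rand

from nigsp.operations import metrics  # hypothetical import path

laplacian = rand(10, 10)

# 1D signal: expected value computed on the signal promoted to a column vector.
s1d = rand(10)
expected = np.matmul(s1d[..., np.newaxis].T, np.matmul(laplacian, s1d[..., np.newaxis]))
assert np.allclose(expected, metrics.smoothness(laplacian, s1d))

# Wrongly-oriented 2D signal: expected value computed on its transpose.
s2d = rand(2, 10)
expected = np.matmul(s2d, np.matmul(laplacian, s2d.T))  # (s2d.T).T is s2d itself
assert np.allclose(expected, metrics.smoothness(laplacian, s2d))
```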
Force-pushed from `f493ec1` to `854c047`.
```python
s3 = rand(2, 10)
laplacian = rand(10, 10)

expected_smoothness1 = np.dot(s1.T, np.dot(laplacian, s1))
```
Is this correct? If so, you don't need a warning and special treatment, since it's the same as the main case!
For `s3` I need to transpose it by hand to compute the `expected_smoothness`, otherwise the dot product won't work! See the difference between lines 81 and 84.
Ah yes, that's correct, sorry. I'm trying to make it work with a 3rd dimension for different subjects, so I'll see how this evolves, as I will probably move from `np.matmul` to `np.tensordot`.
Not sure it'd fit in memory, keep an eye on mem usage! You might have to run over the subjects dim with a loop.
Tensordot might indeed not work out. Maybe you can constrain the axes in the multiplication, or indeed loop through subjects. I have some code in another function (the graph Fourier transform, if I remember correctly) that splits cases based on dimensions (and runs a loop).
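For illustration, the loop-over-subjects option could look like this (a sketch under the assumption that `signal` is `(nodes, time, subjects)`; not the PR's actual code):

```python
import numpy as np


def smoothness_multisubject(laplacian, signal):
    """Apply the quadratic form subject by subject (sketch only).

    Assumes signal has shape (nodes, time, subjects) and returns an
    array of shape (subjects, time, time): one quadratic form each.
    """
    n_time, n_subjects = signal.shape[1], signal.shape[2]
    out = np.empty((n_subjects, n_time, n_time))
    for sub in range(n_subjects):
        s = signal[..., sub]  # (nodes, time) slice for one subject
        out[sub] = np.matmul(s.T, np.matmul(laplacian, s))
    return out
```

The same contraction can be written in one call as `np.einsum('ins,ij,jms->nms', signal, laplacian, signal)` (with the subject axis last); the loop just makes the per-subject memory footprint obvious.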
```python
else:
    raise ValueError("The dimensions of the signal and Laplacian don't match.")

return np.matmul(signal.T, np.matmul(laplacian, signal))
```
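For reference, that one-liner is the graph Laplacian quadratic form; for a combinatorial Laplacian $L = D - W$ with edge weights $w_{ij}$,

$$
S(\mathbf{x}) = \mathbf{x}^\top L \mathbf{x} = \frac{1}{2} \sum_{i,j} w_{ij} (x_i - x_j)^2,
$$

so smaller values correspond to signals that vary less across connected nodes.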
Yeaaaah... Do you remember when your function was one line? Now you understand the pain of the Sdev...
Yes definitely, thanks for the guidance and regular feedback!
@hugofluhr can we merge this in?
Closes #71
PR that adds smoothness computation for signals defined on the nodes of the graph. It's my first contribution to such a project, so I'm happy to get feedback :)
Proposed Changes
- `operations.metrics`
- `objects`

Change Type

- bugfix (+0.0.1)
- minor (+0.1.0)
- major (+1.0.0)
- refactoring (no version update)
- test (no version update)
- infrastructure (no version update)
- documentation (no version update)
- other

Checklist before review