Tone equalizer 2025-04-06 preview version #18656
Conversation
The detailed explanation is here now: https://discuss.pixls.us/t/tone-equalizer-proposal/49314

macOS build fails.

Does this preserve existing edits? I just imported some existing tone equalizer presets. Applying them has no effect, so I believe this version isn't backward compatible and will break existing edits.
src/iop/toneequal.c
Outdated
@@ -126,35 +128,60 @@
DT_MODULE_INTROSPECTION(2, dt_iop_toneequalizer_params_t)
You need to update the version here to support old edits - legacy_params() only gets called if the stored version is less than the version given here (and new edits would get confused with old ones without the version bump).
Looks like a missing version bump - the code to convert old edits is present, but darktable never calls it because it thinks they are already the current version.
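For illustration, here is a tiny self-contained sketch of the gating behaviour described in this comment; the constant and function names are made up for the example and are not darktable's actual dispatch code.

```c
#include <stdio.h>

/* Illustration only (not darktable's actual code): the legacy_params() hook is
 * consulted only when the version stored with an edit is lower than the version
 * declared via DT_MODULE_INTROSPECTION(). Without bumping the declared version,
 * old edits look "current" and are never converted. */
#define DECLARED_MODULE_VERSION 3  /* must become 3 once the v3 params exist */

static void load_edit(const int stored_version)
{
  if(stored_version < DECLARED_MODULE_VERSION)
    printf("stored v%d < declared v%d: migration code runs\n",
           stored_version, DECLARED_MODULE_VERSION);
  else
    printf("stored v%d is considered current: migration is skipped\n",
           stored_version);
}

int main(void)
{
  load_edit(2);  /* an old edit: gets migrated */
  load_edit(3);  /* a new edit: left as-is */
  return 0;
}
```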
TurboGit
left a comment
First very quick review (important as otherwise this will break old edits).
Not tested yet.
n->quantization = 0.0f;
n->smoothing = sqrtf(2.0f);
// V3 params
Please keep this as new_version = 2. legacy_params() will be called multiple times until we reach the last version. The rule here is that we never have to touch old migration code; we just add a new chunk to go one step toward the final version.
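A minimal sketch of that one-step-per-version rule, using simplified stand-in structs rather than the real dt_iop_toneequalizer_params_* definitions: each chunk converts a single version step and reports the version it produced, and the framework keeps calling it until the current version is reached.

```c
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in structs for illustration; NOT the real
 * dt_iop_toneequalizer_params_* definitions. */
typedef struct { float exposure[8]; } params_v1_t;
typedef struct { float exposure[8]; float smoothing; float quantization; } params_v2_t;

/* One chunk migrates exactly one version step and reports the version it
 * produced. The framework keeps calling until the latest version is reached,
 * so old chunks never need to be touched when a new version is added. */
static void *legacy_step(const void *old, const int old_version, int *new_version)
{
  if(old_version == 1)
  {
    const params_v1_t *o = old;
    params_v2_t *n = malloc(sizeof(*n));
    memcpy(n->exposure, o->exposure, sizeof(n->exposure));
    n->smoothing = 1.4142135f;   /* sqrtf(2.0f), matching the default shown above */
    n->quantization = 0.0f;
    *new_version = 2;            /* v1 -> v2 only; a separate chunk handles v2 -> v3 */
    return n;
  }
  *new_version = old_version;    /* already at this chunk's target version */
  return NULL;
}
```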
src/iop/toneequal.c
Outdated
const dt_iop_toneequalizer_params_v2_t *o = old_params;
dt_iop_toneequalizer_params_v3_t *n = malloc(sizeof(dt_iop_toneequalizer_params_v3_t));
Since all your new fields are at the end of the struct, just do:
memcpy(n, o, sizeof(dt_iop_toneequalizer_params_v2_t));
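If the memcpy route is taken, a compile-time check can document the assumption that the v2 layout is a strict prefix of v3 (again using simplified stand-in structs, not the real ones from toneequal.c):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in structs for illustration; NOT the real definitions. */
typedef struct { float exposure[8]; float smoothing; float quantization; } params_v2_t;
typedef struct { float exposure[8]; float smoothing; float quantization;
                 float post_scale; float post_shift; } params_v3_t;

/* The memcpy of sizeof(params_v2_t) bytes is only valid if v2 is a strict
 * prefix of v3: same fields, same order, same offsets. */
static_assert(offsetof(params_v3_t, post_scale) == sizeof(params_v2_t),
              "v2 must be a strict prefix of v3");

static void upgrade_v2_to_v3(const params_v2_t *o, params_v3_t *n)
{
  memcpy(n, o, sizeof(params_v2_t));  /* copy the shared prefix in one call */
  n->post_scale = 0.0f;               /* then initialize only the new fields */
  n->post_shift = 0.0f;
}
```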
n->quantization = o->quantization;
n->smoothing = o->smoothing;
// V3 params
And keep only this section below:
Thanks for your feedback. I will provide a new version in a few days which includes the changes that were suggested here. Some advice on my two major roadblocks would be helpful:
@TurboGit please note there has been a lot of discussion in the thread on pixls. Please take that into consideration.
I am late to the party and I will also keep silent again for private reasons IRL. From my perspective, I do not have many issues with ToneEQ. There is only one thing that has bothered me since day one of this module:

I pretty much dislike it and had long arguments with the author Aurélien Pierre at the time. He kept trying to convince me that it is not doable differently for this and that reason (as he usually behaves). That bar indicator on the masking tab did not guide me as precisely as he always pretended it would. Nowadays I have mapped those two sliders to two rotaries on my MIDI board (X-Touch Mini) and I can attenuate while looking at the advanced tab. The luminance estimator / preserve details is another thing slightly above my head, but from time to time I use it. All the rest I barely touch. After setting the mask, I hover the mouse over my image and scroll the wheel (for this I sometimes need to toggle the module off/on, as the rotaries seem to mess with the module focus). For me the "simple" tab can just be hidden totally. Besides, please do not clutter the advanced tab. I hope my workflow (above) will not be destroyed, nor the old edits. In other words: never change a running system.
@paperdigits : Yes, I've seen and followed the discussion a bit, but it became so heated that it has almost no value to me at this point, so I don't follow it on pixls anymore. We'll see if part of it is moved here in a PR.
Existing user-defined presets no longer work. As for the included presets, whenever I apply one, the histogram contracts to take half of the horizontal space and moves to the left edge. If I move the histogram back where I want it and apply a preset again, the same problem occurs.

Ad 1) Nope, you must not use the parameters for keeping data while processing the module. We have the global module data; you might use that to keep data shared by all instances of a module. But you'd have to implement some locking and "per instance" handling on your own. Unfortunately we currently don't have "global_data_per_instance" available. Ad 2) Yes, exports and the CLI interface.
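A generic sketch of the pattern described above - values computed in one pixelpipe are published into state the module shares between its pipes, guarded by a mutex, instead of being written into the params. The struct and function names here are illustrative, not an existing darktable API.

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative per-instance shared state; in darktable this would have to live
 * in data the module manages itself (there is no ready-made
 * "global_data_per_instance"), with the mutex initialized once, e.g. via
 * PTHREAD_MUTEX_INITIALIZER or pthread_mutex_init(). */
typedef struct
{
  pthread_mutex_t lock;
  bool valid;        /* set once some pipe has produced fresh values */
  float post_scale;
  float post_shift;
} shared_align_t;

/* Called by the pipe that computes the alignment (e.g. PREVIEW). */
static void publish_alignment(shared_align_t *s, const float scale, const float shift)
{
  pthread_mutex_lock(&s->lock);
  s->post_scale = scale;
  s->post_shift = shift;
  s->valid = true;
  pthread_mutex_unlock(&s->lock);
}

/* Called by a consuming pipe (e.g. FULL); returns false if nothing was
 * published yet, so the caller can fall back to computing its own values. */
static bool fetch_alignment(shared_align_t *s, float *scale, float *shift)
{
  pthread_mutex_lock(&s->lock);
  const bool ok = s->valid;
  if(ok)
  {
    *scale = s->post_scale;
    *shift = s->post_shift;
  }
  pthread_mutex_unlock(&s->lock);
  return ok;
}
```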
@wpferguson Are you sure that the settings were the same? The version in this PR had the bug with the version number introspection, so it did not convert existing settings correctly. It is probably better to wait for a new version to test this.

So it probably does not make sense at all to depend on values that come from the GUI during export. The core problem is the scale-dependence of the guided filter. An alternative approach for exporting would be to create a downscaled copy of the image (sized like the GUI preview), apply the GF, get the values I need and then apply them to the full image - essentially simulating the preview calculation. Not the most efficient approach, but it would only be needed during export when auto_align is used.
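As a rough sketch of that idea (under the assumption that a plain box average is good enough for the statistics pass), the export path could downscale its input to roughly the preview dimensions before running the guided filter and the mask statistics on the copy; the guided filter and statistics themselves are omitted here.

```c
#include <stdlib.h>

/* Box-downscale a single-channel float image to the target dimensions. This
 * stands in for "create a copy sized like the GUI preview"; the guided filter
 * and the mask statistics that would run on the result are omitted here. */
static float *box_downscale(const float *in, const int iw, const int ih,
                            const int ow, const int oh)
{
  float *out = malloc(sizeof(float) * (size_t)ow * oh);
  if(!out) return NULL;

  for(int y = 0; y < oh; y++)
  {
    const int y0 = y * ih / oh, y1 = (y + 1) * ih / oh;
    for(int x = 0; x < ow; x++)
    {
      const int x0 = x * iw / ow, x1 = (x + 1) * iw / ow;
      float sum = 0.0f;
      int count = 0;
      for(int yy = y0; yy < y1; yy++)
        for(int xx = x0; xx < x1; xx++)
        {
          sum += in[(size_t)yy * iw + xx];
          count++;
        }
      out[(size_t)y * ow + x] = count > 0 ? sum / (float)count : 0.0f;
    }
  }
  return out;
}
```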
Not sure I understood the last post correctly, but if I did, in my opinion there would be a problem. Firstly, for the CLI case, where there is no GUI, how can the GF computation be done on a downscaled image sized like the GUI preview? But also if there is a GUI, I think the export result must not depend on the arbitrary size of the darktable window (and thus the preview size) at the time the export is done. (Later exports of the image with unchanged history stack and same export settings must always yield identical results.)
Definitely not, it won't generally work.
I didn't check/review your code in its current state, but are you sure you set it up correctly? There are some issues, but we use the feathering guide all over using masks and the results are pretty stable for me. About keeping data/results per module instance, this has been daunting me too on some other project. Will propose a solution pretty soon that might help you ...
As far as I understand it, the preview thread calculates the navigation image shown in the top left of the GUI. However, the actual image that the thread sees is much bigger (something like 1000px wide), so the navigation image must be a downscaled version. I hope (but have not yet checked) that the size of this internal image is constant.
The code in the PR is outdated and has known problems; it's not much use looking at it now. I will update it as soon as I have a somewhat consistent state.
Of course it is possible that I might have broken something, but as far as I am aware, I did not change anything about the guided filter calculation but only modified what happens with the results.
This sounds nice, but my next attempt will be trying to avoid this.

Note that your code can figure out how much the image it has been given has been downscaled, in order to determine scaled radii and the like and simulate appropriate results. A bunch of modules with scale-dependent algorithms do this. It isn't perfect, but it does yield pretty stable and predictable results. Look for
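A minimal sketch of that radius-scaling idea, assuming the module knows the downscale factor of the buffer it was handed (darktable modules get this via the region of interest they process); the helper name here is made up.

```c
#include <math.h>

/* Rescale a radius chosen for the full-resolution image so that it covers a
 * comparable area on a downscaled buffer. 'scale' is the downscale factor of
 * the buffer relative to the full image (1.0 = full resolution, 0.25 = 1/4). */
static inline float scaled_radius(const float radius_full, const float scale)
{
  return fmaxf(radius_full * scale, 1.0f);  /* never shrink below one pixel */
}
```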
I have been playing around with the tone equalizer some more and ran into a bit of a roadblock. To recap, what I do is: Image => Grayscale + Guided filter => Mask => Mask Histogram => Percentile values => Auto Scale/Shift values. The buffers in the different pipelines (PREVIEW with GUI and the export) have different sizes and the guided filter is scale-dependent, so when I compute statistics over the mask, it is expected that there is a systematic error. The final calculated values deviate so much that there is a visible difference between the image that is shown in DT and the export. My idea to overcome this was to downscale the image during export to the same size as the preview and use the downscaled version to calculate the statistics (essentially simulating the preview pipe during export). However, the results were still different, even though the calculation was done with images of the same size and with the exact same parameters. Then I added debugging code to write the internal buffers into files and I discovered the reason:

However, the left image (PREVIEW) is still a lot blurrier than my downscaled version. I would have expected them to be mostly the same. Using a blurry version of the image is not ideal for my purposes. However, the bigger problem is that I need to know what happened to the image in the preview pipe if I want to replicate the same steps in the export pipe. I tried to find the relevant parts in the DT code, but had no success so far. I am also interested in the calculation of the size of the PREVIEW pipe image; it seems like the height fluctuates the least and is between 880 and 900 pixels.
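The exact auto scale/shift math is not spelled out in this thread, so the following is only a plausible sketch of the "mask histogram => percentile values => auto scale/shift" step: take low/high percentiles of the EV histogram and derive an affine mapping that places them on a target range. The function names, the percentile choices and the affine form are assumptions, not the PR's actual implementation.

```c
/* Percentile lookup on a histogram of mask values in EV over [ev_min, ev_max]. */
static float histogram_percentile(const unsigned *hist, const int bins,
                                  const float ev_min, const float ev_max,
                                  const float fraction)
{
  unsigned long total = 0;
  for(int i = 0; i < bins; i++) total += hist[i];
  if(total == 0) return ev_min;

  const unsigned long target = (unsigned long)(fraction * (double)total);
  unsigned long cumul = 0;
  for(int i = 0; i < bins; i++)
  {
    cumul += hist[i];
    if(cumul >= target)
      return ev_min + (ev_max - ev_min) * (i + 0.5f) / (float)bins;
  }
  return ev_max;
}

/* Derive shift/scale so that the low/high percentile span of the mask maps
 * onto a desired target span, assuming the adjustment is applied later as
 * ev' = ev * scale + shift (purely an illustrative mapping). */
static void derive_auto_alignment(const unsigned *hist, const int bins,
                                  const float ev_min, const float ev_max,
                                  const float target_low, const float target_high,
                                  float *shift, float *scale)
{
  const float low  = histogram_percentile(hist, bins, ev_min, ev_max, 0.02f);
  const float high = histogram_percentile(hist, bins, ev_min, ev_max, 0.98f);
  const float span = (high - low > 1e-6f) ? (high - low) : 1e-6f;

  *scale = (target_high - target_low) / span;  /* contrast of the mask */
  *shift = target_low - low * (*scale);        /* exposure of the mask */
}
```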
The demosaic module downscales if the requested output dimensions for the following module are small enough. FULL is almost certainly getting more data out of demosaic than PREVIEW, which is reflected in the sharper image after downscaling. Run with
7c707c2 to 64fc0de
I finally have a version that I consider good enough to show publicly. It would be nice if someone would take the time to look at my code; there are a few "TODO MF" markers with open questions. The module should be (if there are no bugs) compatible with old edits. I have checked that with the parameters "align: custom, scale: 0, shift: 0" the results are the same as in 5.0.1. Most of my changes were not that much effort. Scaling the curve after the guided filter, coloring the curve, and changing the UI (even though it still has a few issues) were not that difficult. The one thing that was hard and got me stuck for weeks is the auto-alignment feature:

The data that is available to the pixelpipes during GUI operations is different from the export. The FULL pipe may not see the whole image, so it is not suitable for calculating mask exposure/contrast. The PREVIEW pipe sees the image completely, but it is scaled down and pretty blurry. The guided filter is sensitive to both of these effects, so statistics on the PREVIEW luminance mask deviate significantly from statistics over the whole image. The (unfortunately not so nice) solution to this problem is to request the full image in PIXELPIPE_FULL when necessary. Of course this has an impact on performance. However, in practice I found it acceptable, and it only occurs when the user explicitly requests auto-alignment of the mask - so users who use the module like before should not experience performance degradation (unless I accidentally broke something, e.g. OpenMP). Known issues:

Other notes: I have seen #18722 and to be honest I am not sure what it does exactly, but I wondered if it would be possible to give modules like tone equalizer access to an "early stable" version of the image - a version from after demosaic, but before any modules that users typically use to change the image. If this were possible, I would switch the calculation of the luminance mask to this input when the user requests auto-alignment, so re-calculating the mask would not be necessary as often. Providing modules with access to an early version of the image would also have other use cases, like stable parametric masks that don't depend on the input of their respective module.
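For what it's worth, a per-pixel affine adjustment in EV is one way such post-filter scale/shift parameters could be applied; this is only my assumption for illustration and may not match the PR's actual definition of post_scale/post_shift.

```c
#include <stddef.h>

/* Apply an affine exposure/contrast adjustment to a log2 (EV) luminance mask
 * AFTER the guided filter has produced it. The affine form and the parameter
 * semantics are an assumption for illustration, not the PR's actual code. */
static void apply_post_alignment(float *mask_ev, const size_t npixels,
                                 const float post_scale, const float post_shift)
{
  for(size_t k = 0; k < npixels; k++)
    mask_ev[k] = mask_ev[k] * post_scale + post_shift;
}
```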
And only once when the auto-align is changed, right?
Indeed, you can even resize the graph to a negative size :)
Indeed, we need the hierarchical preset naming back. Should be easy.
A question for @jenshannoschwalm I suppose. During my testing I found the auto-align a very big improvement indeed. I would probably have used the fully-fit as default or maybe the "auto-align at mid-tone". BTW, since the combo label is "auto align mask exposure" I would use simplified naming for the entries:
TurboGit
left a comment
Some changes proposed while doing a first review.
To me the new ToneEQ UI is nice and better than what we have in current master.
I would suggest addressing the remaining minor issues (crash when resizing the graph) plus the changes proposed here, and doing a clean-up of the code to remove dead code and/or commented-out code, probably also removing the debug code; then I'll do a second review.
From there we should also have some other devs testing it; such drastic changes may raise some issues with others. Again, to me that's a change in the right direction.
n->quantization = 0.0f;
n->smoothing = sqrtf(2.0f);
*new_params = n;
*new_params_size = sizeof(dt_iop_toneequalizer_params_v2_t);
*new_version = 2;
*new_params_size = sizeof(dt_iop_toneequalizer_params_v3_t);
Looks like this is wrong; we do a one-step update from v1 to v2. As said previously, we don't want to change the older migration path.
As said previously, this change should be reverted. We want to keep the step migration from v1 to v2.
But that doesn't require a big redesign. I compare it to the relocation of the mask adjustment sliders: sometimes a couple of small changes are more effective than a redesign of a module which isn't broken or is carrying a couple of technical debts. About the curve: if it's wobbly, then it might be an indication of why a strange result occurs in the image after moving a slider, so you're able to correct it. But keep in mind: the tone equalizer's purpose isn't creating a proper curve ;)

Sure, I'm trying to add input for @marc-fouquet, who is asking about a minimal set of changes that would be accepted by all and not controversial. I think that this little annoyance with the current TE is a good thing to have in mind and possibly fix soon.

Could you be more precise about what you mean here? What I'm saying is that mask setup is not an obvious thing and not every adjustment can be done with a single magic mask setup. If it is the equivalent of clicking both the "exposure" and "contrast" compensation wands, then I object, because I almost never use the contrast compensation auto and I very often tweak the auto-exposure one. My impression is that most people's concern is exactly this part of the change.
@elstoc : I'm not sure how to state this. Just use current TE as I said:
I want 3 & 4 to be done in one step IF this is the first use of TE and the mask has not been set up manually. Having 3 & 4 as they are today is a pain and plain wrong UX to me. EDIT: And moreover, I'd like to have the mask spread out over the whole exposure range (again, if no manual adjustments have been done).
Maybe I just have luck with my Apple Magic Mouse - these two scroll "clicks" feel like simply one slide of the finger - it's harder to do just step 3 ;)
@TurboGit Again, you're just saying "the mask is set up" as if it is obvious that there's only one way to do this. Spreading the mask over the histogram is not the only way to do it, and IMO it is often not important at all (I often just want to adjust the highlights). We also have to deal with the actual structure of the mask (contrast, edge detection, etc.). So that simple phrase is doing a lot of heavy lifting and will be controversial unless we're really sure what we want and that it's achieving the right thing.
@elstoc : It seems like we do not understand each other, or you are not reading all I said. I said: "adjust the mask if not already manually done". That is, in your case, if you want to adjust the mask yourself manually, fine. I do that also from time to time. But in many cases - for me - just using the default mask + clicking on the two auto-adjustments to spread out the histogram works fine. That's why I think an initial auto-mask-histogram setup may well be a good start for most people in many cases. And again, if you want to adjust the mask histogram, fine; as you already do that, it won't be reset. So it seems to me that your workflow is preserved, no?
"I think an initial auto-mask-histogram setup may well be a good start for most people in many cases." I would agree with this if it can be implemented.
Well, since, as I say, I don't like to use the contrast adjustment slider, it feels like I would have to manually undo that if it's going to be auto-activated when activating the module. Perhaps I'm not understanding when you think this automatic histogram setting will happen, because I either want it not to happen at all, or to not adjust the contrast. Perhaps a single button to adjust both (similar to what we have in filmic) would be enough? Bear in mind not everyone uses scrolling over the image to adjust.
Let's hope there's a misunderstanding, because I'm 100% with @elstoc: turning on the tone equalizer for the first time should not do anything by itself.
So do not use it, as it is already what is done on current master.
Definitely some language barrier issues here, because I don't know what you mean by this.
I have already described what TE does here: #18656 (comment). To work it needs a histogram, and this is computed at the second mouse scroll. Saying that it must do NOTHING to work is plain wrong. TE needs a histogram to work and this must be computed/set up somehow.
Mmm... not the kind of reaction/reply I expect here, but OK. If I turn on the tone equalizer, an initial mask is computed based on the starting values. These starting values are all 0.00. That is your starting point and, yes, that'll be wrong 99% of the time. This is the point where you start dialling in what you want/need. If I understand correctly, you want to make the tone equalizer auto-compute the "correct" exposure and contrast mask settings. If that is what you want, then both Elstoc and I are against this (@elstoc correct me if I'm wrong). If this new TE is going to be merged, I don't mind if there's going to be an Auto button that changes a bunch of settings, but that should not be an automation that is activated when first starting this module. I hope I made clear what I mean and hope that I understood correctly what you wanted to communicate. EDIT: I'm also going to be stepping away from this conversation for a while. My replies make my point of view rather clear and I'm not willing to get worked up about all this.
Ok, so I think I see the issue here. @TurboGit you're just saying it should work exactly as it does at the moment, but so that you don't need two scrolls to get started adjusting the curve (I assumed you meant auto-spreading the mask as well)? I think this needs two steps because the module needs to be both expanded and enabled for it to start working (I guess it won't process the incoming pipeline data until it's enabled?), and the first scroll causes the module to switch on.
@elstoc : Yes, exactly that. Communication is sometimes very hard!
Happy to test/review that as a standalone PR, but I'd like to see what code is actually achieving this alongside the testing, and it's hard with all the other stuff going on here. This is also the approach I'd recommend for getting other features merged - make them as small as possible and do them one at a time. Part of the problem with this PR has just been the number of simultaneous changes going on (and hence simultaneous/overlapping discussions about them).
@elstoc : I agree, let's start again with small incremental PRs if this is OK with @marc-fouquet too.
@jenshannoschwalm, @marc-fouquet I'm looking at this from a support point of view. The supposition is that users will read the manual and figure out what is going wrong, if anything. But we know the users won't read the manual. They will raise an issue asking why it's happening, why it's happening if nothing is wrong, what it means, etc., etc., etc... And you know the users will check the "I've checked..." box because they don't want to bother, or they don't know how, or they can't figure out the correct term to search for. And if we tell them nothing is wrong....
This is way too much UI change to have in one chunk without some conceptualization of what should be changed; it's a solution in search of a problem. Some issues:
I'd not want any of these UI changes.
@wpferguson It seems I disagree here :-) Also from a support point of view - people will notice the warning and will start to think (hopefully). BTW, we have such visual warnings in more places. Just one example - the agx curve: why do we "warn" about "non-sigmoid"? Also the current TE has such visual warnings, and that's very good to me.

BTW, I simply don't get why there is so much "heat" in the discussion. Yes - this PR is almost impossible to follow and I stopped trying a while ago. And I think some initial ideas are simply not good but ... About the curve: there is certainly a (minor) problem with the current TE at harsh transitions. AP tried to mitigate it as well as possible and it's already very good. But the way we do the curve is related to those problems (at least that's how I remember it from my last detailed reading of the code many months ago), so it seems very worthwhile to investigate that. I also remember the time I was fighting with fixing issues with color equalizer brightness and saturation using the available curve. Undershoots are very bad for luminance correction. So I was very surprised to read such strong opinions, and I am absolutely not sure those comments are based on code reading, maths or whatever. CE mitigates those transition problems very well - at least I think so - and if Marc starts a dedicated PR about that, I would certainly look into it in detail and have some ideas about it.

Don't misunderstand me, I am totally fine with the TE module as is and would personally not want to spend time and effort on it. But I fully appreciate @marc-fouquet's work on this and am very interested in what parts of it will land in master as a definitive improvement.

Because this is supposed to REPLACE the existing tone equalizer, which isn't broken, but will be for a lot of users, including me, if it does get merged. As some have said above, it's a solution in search of a problem. If it's merged in its current form it will BREAK my workflow, and then I will spend days/hours/weeks trying to work around the "fixed" tone equalizer. If it were merged as a separate module (tone mechanic or tone toolbox or whatever), then I'm pretty sure there would not be much of a discussion. So far @TurboGit is the only one that can demonstrate a reproducible problem that should be fixed, but that's not in the PR. The things in the PR that are "fixed" I don't see / can't reproduce, and I use the tone equalizer a lot (multiple instances in each image, 200+ images/week). EDIT: My thought is that this can be merged as a separate module, and then those who want it will be happy and those who don't will be happy. Or someone can be unhappy.
"The supposition is that users will read the manual and figure out what is going wrong, if anything." This is why I suggested putting the meaning of the warning in the tooltip. If users see the red line and wonder "what does that mean?", they only need to read the tooltip to find out. Someone above suggested we don't need the warnings as we can see the effect with our own eyes. But some of the gradient issues I noticed were only obvious at 100%, and I was making TE adjustments zoomed out and therefore missed them. Coloured warnings would help with this. Also, a module like filmic gives coloured warnings when the curve inverses, which is helpful and I've never seen anyone complain about it. So if it's done in one place, why not another?
If the mask matches your expectations, then it's not undesirable - did you check the generated mask? Keep in mind, it's not about the best fit of a mask histogram but about generating a mask that supports your editing intention, meaning proper separation of the tonal ranges you want to edit differently.
Just closing this and leaving the project. Too tired of fighting.

@marc-fouquet : That's your decision, but I'm sad as there were some really nice enhancements in there.












This is a preview of my proposed changes to the tone equalizer module. There is still a lot of debug code in it and there are known problems; see below.
This message only contains code aspects. I will write a detailed post from a user perspective for pixls.us.
Changes
- New post_scale, post_shift and post_auto_align settings, which allow adjusting mask exposure and contrast AFTER the guided filter calculation.
- post_scale and post_shift are derived from this histogram.
- With post_auto_align = custom, post_scale = 0, post_shift = 0, the results are the same as with the old tone equalizer (I was not able to get a byte-identical export, but in GIMP the difference between my version and 5.0 was fully black in my tests).
- Switched from dt_dev_pixelpipe_piece_hash to dt_dev_hash_plus after I noticed that the module constantly re-computed the guided filter, even though this was not necessary.
- Renamed some functions (e.g. compute_lut_correction to compute_gui_curve) and sorted the functions. The consequence is that I have touched almost all lines of code, so diffs will not be helpful in tracking my changes.

Known issues
Tbh., I had more problems with the anatomy of a darktable module (threads, params, data, gui-data, all the GTK stuff) than with the actual task at hand.
Known problems are:
- When post_auto_align is used to set the mask exposure, the values for post_scale and post_shift are calculated in PREVIEW and used in FULL. However, other pixel pipes (especially export) calculate the mask exposure on their own and may get a different result that leads to a different output.

Things I noticed about the module (a.k.a. issues already present in 5.0)
Related Discussion
Issue #17287