Skip to content

Conversation

@marc-fouquet
Contributor

This is a preview of my proposed changes to the tone equalizer module. There is still a lot of debug code in it, and there are known problems; see below.

This message covers only the code aspects. I will write a detailed post from a user perspective on pixls.us.

Changes

  • Introduced post_scale, post_shift and post_auto_align settings, which allow adjusting mask exposure and contrast AFTER the guided filter calculation.
    • The histogram calculation is now split into two steps. First, a very detailed histogram is calculated over a large dynamic range. The GUI histogram and internal parameters used for automatically computing post_scale and post_shift are derived from this histogram.
    • The new parameters are not actually applied to the luminance mask, but to the look-up table that is used to modify the image (see the sketch after this list).
    • With post_auto_align=custom, post_scale=0, post_shift=0, the results are the same as with the old tone equalizer (I was not able to get a byte-identical export, but in GIMP the difference between my version and 5.0 was fully black in my tests).
  • Changed upstream pipe change detection from dt_dev_pixelpipe_piece_hash to dt_dev_hash_plus after I noticed that the module constantly re-computed the guided filter, even though this was not necessary.
  • Added experimental coloring to the curve in the GUI; it now turns orange or red when the user does something that is probably not a good idea:
    • Raising shadows/lowering highlights with the guided filter turned off.
    • Lowering shadows/raising highlights with the guided filter turned on. The user probably expects a gain in contrast here, but the guided filter will work against this.
    • Setting the downward slope of the curve to be too steep.
  • UI changes:
    • Sliders (previously on the "simple" page) are now located in a collapsible section beneath the histogram.
    • Made the histogram/curve graph resizable (see issues!).
  • In my efforts to understand the code, I renamed things that I found confusingly named (e.g. compute_lut_correction to compute_gui_curve) and sorted the functions. The consequence is that I have touched almost all lines of code, so diffs will not be helpful in tracking my changes.
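
Conceptually, the post-alignment can be folded into the LUT lookup rather than into the mask. A purely illustrative sketch (the function, its parameters and the remapping anchor are all invented here, not the PR's actual code); note that with post_scale = 0 and post_shift = 0 it reduces to a plain normalized lookup, matching the compatibility note above:

  #include <math.h>

  // Illustrative: shift/scale the exposure axis while sampling the correction
  // LUT, leaving the cached guided-filter mask untouched.
  static inline float sample_lut_aligned(const float lut[], const int lut_size,
                                         const float lum_ev,     // mask value in EV
                                         const float ev_min, const float ev_max,
                                         const float post_scale, // widens/narrows the range (EV)
                                         const float post_shift) // moves the range (EV)
  {
    // remap the exposure axis; anchoring at ev_min is an assumption
    const float t = (lum_ev - post_shift - ev_min) / (ev_max - ev_min + post_scale);
    const float x = fminf(fmaxf(t, 0.f), 1.f) * (float)(lut_size - 1);
    const int i = (int)x;
    const int j = (i + 1 < lut_size) ? i + 1 : i;
    const float f = x - (float)i;
    return lut[i] * (1.f - f) + lut[j] * f; // linear interpolation
  }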

Known issues

To be honest, I had more problems with the anatomy of a darktable module (threads, params, data, gui-data, all the GTK stuff) than with the actual task at hand.

Known problems are:

  • Dragging the histogram/curve graph too small causes DT to crash. I am clearly still missing something here. There is also no horizontal bar on mouseover to indicate that the graph can be resized.
  • When post_auto_align is used to set the mask exposure, the values for post_scale and post_shift are calculated in PREVIEW and used in FULL. However, other pixel pipes (especially export) calculate the mask exposure on their own and may get a different result that leads to a different output.

Things I noticed about the module (a.k.a. issues already present in 5.0)

  • Resetting the module multiple times makes the histogram disappear until the user moves a slider.

Related Discussion

Issue #17287

@marc-fouquet
Contributor Author

The detailed explanation is here now: https://discuss.pixls.us/t/tone-equalizer-proposal/49314

@MStraeten
Collaborator

macOS build fails

~/src/darktable/src/iop/toneequal.c:1343:94: fatal error: format specifies type 'long' but the argument has type 'dt_hash_t' (aka 'unsigned long long') [-Wformat]
 1343 |     printf("toneeq_process PIXELPIPE_PREVIEW: hash=%ld saved_hash=%ld luminance_valid=%d\n", current_upstream_hash, saved_upstream_hash,
      |                                                    ~~~                                       ^~~~~~~~~~~~~~~~~~~~~
      |                                                    %llu
1 error generated.
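
A minimal fix, following the compiler's hint (the third argument's name is inferred from the format string; the casts keep the format portable even if dt_hash_t changes):

  printf("toneeq_process PIXELPIPE_PREVIEW: hash=%llu saved_hash=%llu luminance_valid=%d\n",
         (unsigned long long)current_upstream_hash,
         (unsigned long long)saved_upstream_hash,
         luminance_valid);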

@wpferguson
Member

wpferguson commented Apr 6, 2025

Does this preserve existing edits?

Just imported some existing tone equalizer presets. Applying them has no effect, so I believe this version isn't backward compatible and will break existing edits.

@@ -126,35 +128,60 @@

DT_MODULE_INTROSPECTION(2, dt_iop_toneequalizer_params_t)
Collaborator

You need to update the version here to support old edits - legacy_params() only gets called if the stored version is less than the version given here (and new edits would get confused with old ones without the version bump).
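
That is, bump the version argument of the macro shown above so it matches the new params struct:

  DT_MODULE_INTROSPECTION(3, dt_iop_toneequalizer_params_t)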

@ralfbrown
Collaborator

Does this preserve existing edits?

Looks like a missing version bump - the code is present to convert old edits, but darktable doesn't call it since it thinks they're still the current version.

Member

@TurboGit left a comment


First very quick review (important as otherwise this will break old edits).

Not tested yet.

n->quantization = 0.0f;
n->smoothing = sqrtf(2.0f);

// V3 params
Member

Please keep this as new_version = 2. legacy_params() will be called multiple times until we reach the latest version. The rule here is that we never have to touch old migration code; we just add a new chunk that goes one step toward the final version (see the sketch below).
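
In other words, the migration chain should look roughly like this (a sketch; the conversion bodies are elided, and only the v2-to-v3 chunk is new):

  int legacy_params(dt_iop_module_t *self, const void *const old_params,
                    const int old_version, void **new_params,
                    int32_t *new_params_size, int *new_version)
  {
    if(old_version == 1)
    {
      // existing v1 -> v2 migration, untouched
      // (allocate a v2 struct, convert the v1 fields, set *new_params and *new_params_size)
      *new_version = 2; // one step only; the core calls legacy_params() again
      return 0;
    }
    if(old_version == 2)
    {
      // new chunk: v2 -> v3, copying the old fields and initializing the new ones
      *new_version = 3;
      return 0;
    }
    return 1; // unknown version
  }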


const dt_iop_toneequalizer_params_v2_t *o = old_params;
dt_iop_toneequalizer_params_v3_t *n = malloc(sizeof(dt_iop_toneequalizer_params_v3_t));

Member

Since all your new fields are at the end of the struct, just do:

memcpy(n, o, sizeof(dt_iop_toneequalizer_params_v2_t));

n->quantization = o->quantization;
n->smoothing = o->smoothing;

// V3 params
Member

And keep only this section below:

@marc-fouquet
Contributor Author

Thanks for your feedback. I will provide a new version in a few days which includes the changes that were suggested here.

Some advice on my two major roadblocks would be helpful:

  1. The auto-alignment (post_scale/post_shift) is calculated in PIXELPIPE_PREVIEW and I would like to use the results in all pipes. It is easy to get this data to the main window (PIXELPIPE_FULL) by storing it in g. However, I also need a way to apply the same values to exports (which don't have g). Since the guided filter is not scale-invariant, the results would differ somewhat if I re-calculated the alignment with pipe-local data during export.
  • Does it make sense to store this data in p (from process running PIXELPIPE_PREVIEW) using fields that are not associated with GUI elements? These fields could be floats that are initialized to NaN and replaced with real values when they are available; commit_params would copy them to d.
  • Are there cases in which an image has never seen a GUI at all, but still needs to be processed in a pipe correctly?
  2. The problem with resizing the graph. I have only done two things:
  • In gui_init
  g->area = GTK_DRAWING_AREA(dt_ui_resize_wrap(NULL,
    0,
    "plugins/darkroom/toneequal/graphheight"));
  • and in _init_drawing
  g->allocation.height -= DT_RESIZE_HANDLE_SIZE;
  • After that, resizing the graph works, but no handle is displayed and it is possible to drag the graph too small, which results in a crash.
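
A guard along these lines might avoid the crash (a sketch; the minimum height and the right place to clamp are assumptions on my part):

  // in _init_drawing (sketch): never draw with a zero or negative height
  int height = g->allocation.height - DT_RESIZE_HANDLE_SIZE;
  if(height < 100) height = 100; // hypothetical floor
  g->allocation.height = height;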

@paperdigits
Collaborator

@TurboGit please note there has been a lot of discussion in the thread on pixls. Please take that into consideration.

@AxelG-DE

I am late to the party and I will also keep silent again for private reasons IRL.

From my perspective, I do not have many issues with ToneEQ.

There is only one thing that has bothered me since day one of this module:

  • mask exposure/contrast compensation sliders are on masking tab
  • histogram is on the advanced tab
  • For precise attenuation one needs to jump back and forth between those two tabs.

I pretty much disliked this and had long arguments with the author, Aurélien Pierre, at the time. He kept trying to convince me that it was not doable differently for this and that reason (as he usually did). The bar indicator on the masking tab did not guide me as precisely as he always claimed it would.

Nowadays I have mapped those two sliders to two rotaries on my midi-board (X-touch mini) and I can attenuate while looking at the advanced tab.

The luminance estimator / preserve details is another thing slightly above my head but from time to time I use it.

All the rest I barely touch.

After setting the mask, I hover the mouse over my image and scroll the wheel (for this I sometimes need to toggle the module off/on, as the rotaries seem to mess with the module focus).

For me the “simple” tab can just be hidden totally. Besides, please do not clutter the advanced tab.

I hope my workflow (above) will not be destroyed, nor the old edits.

In other words: Never change a running system
Thank you!

@TurboGit
Member

@paperdigits : Yes, I've seen and followed the discussion a bit, but it became so heated that it has almost no value to me at this point, so I don't follow it on pixls anymore. We'll see if part of it is moved here in a PR.

@wpferguson
Member

Existing user-defined presets no longer work.

As for the included presets, whenever I apply one, the histogram contracts to half of the horizontal space and moves to the left edge. If I move the histogram back where I want it and apply a preset again, the same problem occurs.

@wpferguson
Member

Showing the mask with preserve details shows no difference between no, EIGF, and averaged EIGF.

Masks are vastly different between tone equalizer on current master and this PR.

New masks...

[five mask screenshots from this PR]

versus current master

[five mask screenshots from current master]

@jenshannoschwalm
Collaborator

  • Does it make sense to store this data in p (from process running PIXELPIPE_PREVIEW) using fields that are not associated with GUI elements? These fields could be floats that are initialized to NaN and replaced with real values when they are available; commit_params would copy them to d.

  • Are there cases in which an image has never seen a GUI at all, but still needs to be processed in a pipe correctly?

Ad 1) Nope, you must not use the parameters for keeping data while processing the module. We have the global module data; you might use that to keep data shared by all instances of a module. But you'd have to implement some locking and "per instance" handling on your own. Unfortunately we currently don't have "global_data_per_instance" available.

Ad 2) Yes, exports and the cli interface.

@marc-fouquet
Contributor Author

@wpferguson Are you sure that the settings were the same?

The version in this PR had the bug with the version number introspection, so it did not convert existing settings correctly. It is probably better to wait for a new version to test this.

@marc-fouquet
Contributor Author

  • Are there cases in which an image has never seen a GUI at all, but still needs to be processed in a pipe correctly?

Ad 2) Yes, exports and the cli interface.

So it probably does not make sense at all to depend on values that come from the GUI during export.

The core problem is the scale-dependence of the guided filter. An alternative approach for exporting would be to create a downscaled copy of the image (sized like the GUI preview), apply the GF, get the values I need and then apply them to the full image - essentially simulating the preview calculation. Not the most efficient approach, but it would only be needed during export when auto_align is used.

@rgr59

rgr59 commented Apr 12, 2025

Not sure I understood the last post correctly, but if I did, in my opinion there would be a problem.

Firstly, for the CLI case, where there is no GUI, how can the GF computation be done on a downscaled image sized like the GUI preview? But also if there is a GUI, I think the export result must not depend on the arbitrary size of the darktable window (and thus the preview size) at the time the export is done. (Later exports of the image with unchanged history stack and same export settings must always yield identical results.)

@jenshannoschwalm
Collaborator

So it probably does not make sense at all to depend on values that come from the GUI during export.

Definitely not, it won't generally work.

The core problem is the scale-dependence of the guided filter.

I didn't check/review your code in its current state, but are you sure you set it up correctly? There are some issues, but we use the feathering guide all over for masks and the results are pretty stable for me.

About keeping data/results per module instance, this has been haunting me too on another project. I will propose a solution pretty soon that might help you...

@marc-fouquet
Contributor Author

Firstly, for the CLI case, where there is no GUI, how can the GF computation be done on a downscaled image sized like the GUI preview? But also if there is a GUI, I think the export result must not depend on the arbitrary size of the darktable window (and thus the preview size) at the time the export is done. (Later exports of the image with unchanged history stack and same export settings must always yield identical results.)

As far as I understand it, the preview thread calculates the navigation image shown in the top left of the GUI. However, the actual image that the thread sees is much bigger (something like 1000px wide), so the navigation image must be a downscaled version. I hope (but have not yet checked) that the size of this internal image is constant.

@marc-fouquet
Contributor Author

I didn't check/review your code in its current state

The code in the PR is outdated and has known problems, so there is not much use in looking at it now. I will update it as soon as I have a somewhat consistent state.

but are you sure you set it up correctly? There are some issues, but we use the feathering guide all over for masks and the results are pretty stable for me.

Of course it is possible that I broke something, but as far as I am aware, I did not change anything about the guided filter calculation; I only modified what happens with the results.

About keeping data/results per module instance, this has been haunting me too on another project. I will propose a solution pretty soon that might help you...

This sounds nice, but in my next attempt I will try to avoid needing it.

@ralfbrown
Collaborator

Note that your code can figure out how much the image it has been given has been downscaled in order to determine scaled radii and the like to simulate appropriate results. A bunch of modules with scale-dependent algorithms do this. It isn't perfect, but does yield pretty stable and predictable results.

Look for piece->iscale and roi_in->scale in e.g. sharpen.c and diffuse.c. Most modules access these in process(), but it looks like tone equalizer actually accesses and caches this info in modify_roi_in().
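
For illustration, the pattern in sharpen.c looks roughly like this (simplified; d->radius stands for whatever full-resolution parameter needs scaling):

  // Convert a full-resolution radius to the coordinates of the buffer this
  // pipe actually hands us: roi_in->scale is the pipe's downscaling factor,
  // piece->iscale the scale of the input image.
  const float pipe_radius = d->radius * roi_in->scale / piece->iscale;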

@marc-fouquet
Contributor Author

I have been playing around with the tone equalizer some more and ran into a bit of a roadblock.

To recap, what I do is:

Image => Grayscale + Guided filter => Mask => Mask Histogram => Percentile values => Auto Scale/Shift values
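
For reference, the "Mask Histogram => Percentile values" step is just a cumulative scan (a generic sketch, not the module's exact code):

  #include <stdint.h>

  // Return the EV value below which fraction q of the mask pixels fall,
  // given a histogram over [ev_min, ev_max] with `bins` buckets.
  static float hist_percentile(const uint32_t *hist, const int bins,
                               const float ev_min, const float ev_max,
                               const float q)
  {
    uint64_t total = 0;
    for(int i = 0; i < bins; i++) total += hist[i];

    const uint64_t target = (uint64_t)(q * (double)total);
    uint64_t cumul = 0;
    for(int i = 0; i < bins; i++)
    {
      cumul += hist[i];
      if(cumul >= target) // first bin reaching the quantile
        return ev_min + (ev_max - ev_min) * ((float)i + 0.5f) / (float)bins;
    }
    return ev_max;
  }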

The buffers in the different pipelines (PREVIEW with GUI and the export) have different sizes, and the guided filter is scale-dependent, so when I compute statistics over the mask, a systematic error is expected. The final calculated values deviate so much that there is a visible difference between the image that is shown in DT and the export.

My idea to overcome this was to downscale the image during export to the same size as the preview and use the downscaled version to calculate the statistics (essentially simulating the preview pipe during export). However, the results were still different, even though the calculation was done with images of the same size and with the exact same parameters.

Then I added debugging code to write the internal buffers into files and I discovered the reason:

[screenshot: side-by-side crop comparison of the two inputs]

  • The image on the left is a crop from the input of the PREVIEW pipeline.
  • The image on the right is from the export pipeline. The input buffer was downscaled to the same size as PREVIEW using dt_interpolation_resample. I tested the different interpolation types; this example uses DT_INTERPOLATION_BILINEAR, which, in my understanding, should be the least sharp option.

However, the left image (PREVIEW) is still a lot blurrier than my downscaled version. I would have expected them to be mostly the same.

Using a blurry version of the image is not ideal for my purposes. However, the bigger problem is that I need to know what happened to the image in the preview pipe if I want to replicate the same steps in the export pipe.

I tried to find the relevant parts in the DT code, but have had no success so far. I am also interested in how the size of the PREVIEW pipe image is calculated; it seems like the height fluctuates the least and is between 880 and 900 pixels.

@ralfbrown
Collaborator

The demosaic module downscales if the requested output dimensions for the following module are small enough. FULL is almost certainly getting more data out of demosaic than PREVIEW, which is reflected in the sharper image after downscaling. Run with -d perf to see the image sizes being passed along the pixelpipe.

@marc-fouquet force-pushed the 2025-04_toneequal_preview branch from 7c707c2 to 64fc0de on May 1, 2025 08:32
@marc-fouquet
Contributor Author

I finally have a version that I consider good enough to show publicly. It would be nice if someone would take the time to look at my code; e.g. there are a few "TODO MF" markers with open questions.

The module should be (if there are no bugs) compatible with old edits. I have checked that with parameters "align: custom, scale: 0, shift: 0" the results are the same as in 5.0.1.

Most of my changes were not that much effort. Scaling the curve after the guided filter, coloring the curve, and changing the UI (even though it still has a few issues) were not that difficult. The one thing that was hard and got me stuck for weeks is the auto-alignment feature:

  • If requested by the user, the module should determine the mask exposure and contrast automatically.
  • These values should automatically adapt to upstream image changes.
  • The result should be the same during GUI operations and during export.

The data that is available to the pixelpipes during GUI operations is different from the export. The FULL pipe may not see the whole image, so it is not suitable for calculating mask exposure/contrast.

The PREVIEW pipe sees the image completely, but it is scaled-down and pretty blurry. The guided filter is sensitive to both of these effects, so statistics on the PREVIEW luminance mask deviate significantly from statistics of the whole image.

The (unfortunately not so nice) solution to this problem is to request the full image in PIXELPIPE_FULL when necessary (see the sketch below). Of course this has an impact on performance. However, in practice I found it acceptable, and it only occurs when the user explicitly requests auto-alignment of the mask - so users who use the module like before should not experience performance degradation (unless I accidentally broke something, e.g. OpenMP).
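
A sketch of what "request the full image" could look like in the module's modify_roi_in() hook (the field and enum names post_auto_align and DT_TONEEQ_ALIGN_CUSTOM are assumptions, not the PR's actual identifiers):

  void modify_roi_in(dt_iop_module_t *self, dt_dev_pixelpipe_iop_t *piece,
                     const dt_iop_roi_t *const roi_out, dt_iop_roi_t *roi_in)
  {
    const dt_iop_toneequalizer_data_t *const d = piece->data;
    *roi_in = *roi_out;

    // widen the request to the whole (scaled) image in the FULL pipe
    // whenever auto-alignment needs global mask statistics
    if(d->post_auto_align != DT_TONEEQ_ALIGN_CUSTOM
       && (piece->pipe->type & DT_DEV_PIXELPIPE_FULL))
    {
      roi_in->x = 0;
      roi_in->y = 0;
      roi_in->width = roundf(piece->buf_in.width * roi_in->scale);
      roi_in->height = roundf(piece->buf_in.height * roi_in->scale);
    }
  }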

Known issues:

  • UI graph resizing is still broken. It is possible to drag the graph too small and crash the program.
  • In the auto-align case, the UI needs both PIXELPIPE_PREVIEW and PIXELPIPE_FULL to be completed to draw the histogram. If one is missing (which can easily happen), no histogram is drawn. (Even in 5.0.1 there are similar situations, e.g. no histogram is drawn after resetting the module twice.)
  • My version was forked from master a while ago. I have looked through the changes on master and applied most of them. The folders for the default presets are still missing, but these are not a problem. The upstream change detection (hash) has been modified on master; in my version I had also noticed that the original code did not work correctly, but my fix is different.

Other notes:

I have seen #18722 and to be honest I am not sure what it does exactly, but I wondered if it would be possible to give modules like tone equalizer access to an "early stable" version of the image - a version from after demosaic, but before any modules that users typically use to change the image. If this were possible, I would switch the luminance-mask calculation to this input when the user requests auto-alignment, so re-calculating the mask would not be necessary as often.

Providing modules with access to an early version of the image would also have other use-cases, like stable parametric masks that don't depend on the input of their respective module.

@TurboGit
Member

TurboGit commented May 3, 2025

However, in practice I found it acceptable, and it only occurs when the user explicitly requests auto-alignment of the mask

And only once when the auto-align is changed, right?

  • UI graph resizing is still broken. It is possible to drag the graph too small and crash the program.

Indeed, you can even resize the graph to a negative size :)

  • The folders for the default presets are still missing, but these are not a problem.

Indeed, we need the hierarchical preset naming back. Should be easy.

I have seen #18722 and to be honest I am not sure what it does exactly, but I wondered if it would be possible to give modules like tone equalizer access to an "early stable" version of the image - a version from after demosaic, but before any modules that users typically use to change the image. If this were possible, I would switch the luminance-mask calculation to this input when the user requests auto-alignment, so re-calculating the mask would not be necessary as often.

A question for @jenshannoschwalm I suppose.

During my testing I found the auto-align a very big improvement indeed. I would probably have used fully-fit as the default, or maybe "auto-align at mid-tone". BTW, since the combo label is "auto align mask exposure" I would use simplified naming for the entries:

  • custom
  • at shadows
  • at mid-tones
  • at highlights
  • fully fit

Member

@TurboGit left a comment

Some changes proposed while doing a first review.

To me the new ToneEQ UI is nice and better than what we have in current master.

I would suggest addressing the remaining minor issues (the crash when resizing the graph) plus the changes proposed here, and doing a clean-up of the code to remove dead code and/or commented-out code, probably also removing the debug code. Then I'll do a second review.

From there we should also have some other devs test it; such drastic changes may raise issues for others. Again, to me this is a change in the right direction.

n->quantization = 0.0f;
n->smoothing = sqrtf(2.0f);

*new_params = n;
*new_params_size = sizeof(dt_iop_toneequalizer_params_v2_t);
*new_version = 2;
*new_params_size = sizeof(dt_iop_toneequalizer_params_v3_t);
Member

Looks like this is wrong; we do a one-step update from v1 to v2. As said previously, we don't want to change the older migration path.

Member

As said previously, this change should be reverted. We want to keep the step migration from v1 to v2.

@TurboGit added this to the 5.2 milestone on May 3, 2025
@TurboGit added the "priority: medium" label on May 3, 2025
@MStraeten
Collaborator

  • 1st mouse scroll - the mask is setup and the editing starts

But that doesn't require a big redesign. I compare it to the relocation of the mask adjustment sliders: sometimes a couple of small changes are more effective than a redesign of a module that isn't broken but carries a couple of technical debts.

About the curve: if it's wobbly, that might be an indication of why a strange result occurs in the image after moving a slider, so you're able to correct it. But keep in mind: the tone equalizer's purpose isn't creating a proper curve ;)

@TurboGit
Member

But that doesn't require a big redesign.

Sure, I'm trying to add input for @marc-fouquet, who is asking about a minimal set of changes that would be accepted by all, nothing controversial. I think that this little annoyance with the current TE is a good thing to have in mind and possibly fix soon.

@elstoc
Contributor

elstoc commented Dec 19, 2025

the mask is setup

Could you be precise about what you mean here? What I'm saying is that mask setup is not an obvious thing, and not every adjustment can be done with a single magic mask setup. If it were the equivalent of clicking both the "exposure" and "contrast" compensation wands, then I object, because I almost never use the contrast compensation auto and I very often tweak the auto-exposure one.

My impression is that most people's concern is exactly this part of the change

@TurboGit
Member

TurboGit commented Dec 19, 2025

@elstoc : I'm not sure how to state this. Just use current TE as I said:

  1. Expand TE
  2. Fly over the picture
  3. Do one scroll - the mask is setup
  4. Do another scroll - the adjustment is done

I want 3 & 4 to be done in one step IF this is the first use of TE and the mask has not been set up manually. Having 3 & 4 as it is today is a pain and plain wrong UX to me.

EDIT: And moreover I'd like the mask to be spread out over the whole exposure range (again, if no manual adjustments have been done).

@MStraeten
Collaborator

  • Do one scroll - the mask is setup
  • Do another scroll - the adjustment is done

Maybe I just have luck with my Apple Magic Mouse - these two scroll "clicks" feel like simply one slide of the finger; it's harder to do just step 3 ;)

@elstoc
Contributor

elstoc commented Dec 19, 2025

@TurboGit Again you're just saying "the mask is setup" as if it were obvious that there's only one way to do this. Mask spread over the histogram is not the only way to do it, and IMO is often not important at all (I often just want to adjust the highlights). We also have to deal with the actual structure of the mask (contrast, edge detection, etc.). So that simple phrase is doing a lot of heavy lifting and will be controversial unless we're really sure what we want and that it's achieving the right thing.

@TurboGit
Member

@elstoc : Seems like we do not understand each other, or you are not reading all I said.

I said: "adjust the mask if not already manually done". That is, in your case, if you want to adjust the mask yourself manually, fine. I do that also from time to time. But in many cases - for me - just using the default mask + clicking on the two auto-adjustments to spread out the histogram works fine. That's why I think an initial auto-mask-histogram setup may well be a good start for most people in many cases. And again, if you want to adjust the mask-histogram, fine; as you already do that, it won't be reset. So it seems to me that your workflow is preserved, no?

@tpinfold

"I think an initial auto-mask-histogram setup may well be a good start for most people in many cases." I would agree with this if it can be implemented.

@elstoc
Contributor

elstoc commented Dec 19, 2025

So it seems to me that your workflow is preserved, no?

Well, since, as I say, I don't like to use the contrast adjustment slider, it feels like I would have to manually undo that if it's going to be auto-activated when activating the module. Perhaps I'm not understanding when you think this automatic histogram setting will happen, because I either want it not to happen at all, or to not adjust the contrast. Perhaps a single button to adjust both (similar to what we have in filmic) would be enough? Bear in mind not everyone uses scrolling over the image to adjust.

@jade-nl

jade-nl commented Dec 19, 2025

Let's hope there's a misunderstanding because I'm 100% with @elstoc:

Turning on the tone-equalizer for the first time should not do anything by itself.

@TurboGit
Member

Turning on the tone-equalizer for the first time should not do anything by itself.

So do not use it as it is already what is done on current master.

@elstoc
Contributor

elstoc commented Dec 19, 2025

So do not use it as it is already what is done on current master

Definitely some language barrier issues here because I don't know what you mean by this

@TurboGit
Member

I have already described what TE does here #18656 (comment)

To work it needs a histogram, and this is computed at the second mouse scroll. Saying that it must do NOTHING to work is plain wrong. TE needs a histogram to work, and this must be computed/set up somehow.

@jade-nl

jade-nl commented Dec 19, 2025

Turning on the tone-equalizer for the first time should not do anything by itself.

So do not use it as it is already what is done on current master.

Mmm... not the kind of reaction/reply I expect here, but OK.

If I turn on the tone-equalizer an initial mask is computed based on the starting values. These starting values are all 0.00. That is your starting point and, yes, that'll be wrong 99% of the time. This is the point you start dialling in what you want/need.

If I understand correctly you want to make the tone-equalizer auto-compute the "correct" exposure and contrast mask settings. If that is what you want then both Elstoc and I are against this (@elstoc correct me if I'm wrong).

If this new TE is going to be merged I don't mind if there's going to be an Auto button that changes a bunch of settings, but that should not be an automation that is activated when first starting this module.

I hope I made clear what I mean and hope that I understood correctly what you wanted to communicate.

EDIT: I'm also going to be stepping away from this conversation for a while. My replies make my point of view rather clear and I'm not willing to get worked up about all this.

@elstoc
Contributor

elstoc commented Dec 19, 2025

Ok, so I think I see the issue here. @TurboGit you're just saying it should work exactly as it does at the moment but just so that you don't need to do two scrolls to get started adjusting the curve (I assumed you meant auto-spreading the mask as well)? I think this needs two steps because the module needs to be both expanded and enabled for it to start working (I guess it won't process the incoming pipeline data until it's enabled?), and the first scroll causes the module to switch on?

@TurboGit
Member

@elstoc : Yes, exactly that. Communication is sometimes very hard!

@TurboGit
Member

@jade-nl @elstoc : Now that my idea is understood, if you test this PR you'll see that this is fixed (yes, fixed, as the current master UX is broken to me). So at least this part could be integrated as a non-controversial PR.

@elstoc
Contributor

elstoc commented Dec 19, 2025

Happy to test/review that as a standalone PR, but I'd like to see what code is actually achieving this alongside the testing, and that's hard with all the other stuff going on here. This is also the approach I'd recommend for getting other features merged - make them as small as possible and do them one at a time. Part of the problem with this PR has just been the number of simultaneous changes going on (and hence simultaneous/overlapping discussions about them).

@TurboGit
Member

@elstoc : I agree, let's start back with small incremental PRs if this is OK with @marc-fouquet too.

@wpferguson
Member

This is a question of mild discomfort for people who know the TE well vs. a big help for people who don't. There is the comment by @SoupyGit above - exactly that is the intention. The average user probably knows much less about DT than the people who are participating in this discussion.

@jenshannoschwalm, @marc-fouquet I'm looking at this from a support point of view. The supposition is that the users will read the manual and figure out what is going wrong, if anything. But we know the users won't read the manual. They will raise an issue asking why it's happening, why it's happening if nothing is wrong, what it means, etc., etc., etc... And you know the users will check the "I've checked..." box because they don't want to bother, or they don't know how, or they can't figure out the correct term to search for. And if we tell them nothing is wrong...

@paperdigits
Collaborator

paperdigits commented Dec 19, 2025

This is way too much UI change to have in one chunk without some conceptualization of what should be changed; it's a solution in search of a problem.

Some issues:

  • Where else do we have tabs below?
  • Way too much stuff is hidden behind combo-box-style drop-downs. It is extremely annoying to adjust the mask now; that is too many clicks.
  • "Global exposure" - redundant and unnecessary; just put an exposure module after TE if that's what you want.
  • The hidden toggle to change from 2 stops to 4 stops is practically undiscoverable and confusing.
  • Alignment tab: why isn't this just integrated under exposure, where it used to be? Seems needless.
  • I'd expect the sliders to move when I click "fully align" or "alignment exposure", but it's not clear what is happening or what I'm supposed to do.

I'd not want any of these UI changes.

@jenshannoschwalm
Collaborator

@wpferguson it seems I disagree here :-) Also from a support point of view - people will notice the warning and will (hopefully) start to think. BTW, we have such visual warnings in more places. Just one example - the agx curve: why do we "warn" about "non-sigmoid"? Also, the current TE has such visual warnings, and that's very good to me.

BTW, I simply don't get why there is so much "heat" in the discussion. Yes - this PR is almost impossible to follow, and I stopped trying to do so a while ago. And I think some initial ideas are simply not good, but...

About the curve: there is certainly a (minor) problem with the current TE at harsh transitions. AP tried to mitigate it as well as possible, and it's already very good. But the way we do the curve is related to those problems (at least that's how I remember it from my last detailed reading of the code many months ago). So it seems very worthwhile to investigate that. I also remember the time I was fighting with fixing issues with color equalizer brightness and saturation using the available curve. Undershoots are very bad for luminance correction. So I was very surprised to read such strong opinions, and I am absolutely not sure those comments are based on code reading, maths or whatever. CE mitigates those transition problems very well - at least I think so - and if marc starts a dedicated PR about that, I would certainly look into it in detail; I have some ideas about that.

Don't misunderstand me, I am totally fine with the TE module as is and would personally not want to spend time and effort on it. But I fully appreciate @marc-fouquet's work on this and am very interested in what parts of it will land in master as a definitive improvement.

@wpferguson
Member

wpferguson commented Dec 19, 2025

i simply don't get why there is so much "heat" in the discussion

Because this is supposed to REPLACE the existing tone equalizer, which isn't broken, but will be for a lot of users, including me, if it does get merged. As some said above, it's a solution in search of a problem. If it's merged in its current form, it will BREAK my workflow, and then I will spend hours/days/weeks trying to work around the "fixed" tone equalizer.

If it would get merged as a separate module (tone mechanic or tone toolbox or whatever) then I'm pretty sure there would not be much of a discussion.

So far @TurboGit is the only one who can demonstrate a reproducible problem that should be fixed, but that's not in the PR. The things in the PR that are "fixed" I don't see/can't reproduce, and I use tone equalizer a lot (multiple instances in each image, 200+ images/week).

EDIT: My thought is that this can be merged as a separate module and then those who want it will be happy and those who don't will be happy. Or, someone can be unhappy.

@tpinfold

Apologies if I am just adding noise to an already noisy conversation. I don't see TE as broken and therefore don't see much need for change. However, what first bugged me was having to switch between the masking and advanced tabs to set the exposure and contrast compensation sliders. That has now been resolved by placing those sliders in the advanced tab. Thanks for that small improvement.

The next issue I have with the TE module is that the autopickers for the exposure and contrast compensation sliders fail, in my view, to place the histogram correctly. I would find the TE module quicker to use if the autopickers either worked correctly or were automatically applied upon computation of the histogram.

Here is an example of the histogram after multiple attempts at correction with the autopickers. It seems clipped to me on the left-hand side. I presume this is undesirable. Am I wrong?
[screenshot: tone equalizer mask histogram clipped at the left edge]

@SoupyGit

"The supposition is that the user's will read the manual and figure out what is going wrong, if anything."

This is why I suggested putting the meaning of the warning in the tooltip. If users see the red line and wonder "what does that mean?", they only need to read the tooltip to find out.

Someone above suggested we don't need the warnings as we can see the effect with our own eyes. But some of the gradient issues I noticed were only obvious at 100%, and I was making TE adjustments zoomed out, so I missed them. Coloured warnings would help with this. Also, a module like filmic gives coloured warnings when the curve inverts, which is helpful, and I've never seen anyone complain about it. So if it's done in one place, why not another?

@MStraeten
Collaborator

It seems clipped to me on the left hand side. I presume this is undesirable. Am I wrong?

If the mask matches your expectations, then it's not undesirable - did you check the generated mask? Keep in mind, it's not about the best fit of the mask histogram but about generating a mask that supports your editing intention, meaning proper separation of the tonal ranges you want to edit differently.

@marc-fouquet
Contributor Author

Just closing this and leaving the project. Too tired of fighting.

@TurboGit
Member

TurboGit commented Jan 9, 2026

@marc-fouquet : That's your decision, but I'm sad as there was some really nice enhancement in there.


Labels

  • controversial - this raises concerns, don't move on the technical work before reaching a consensus
  • documentation-pending - a documentation work is required
  • feature: redesign - current features to rewrite
  • priority: medium - core features are degraded in a way that is still mostly usable, software stutters
  • release notes: pending
  • scope: image processing - correcting pixels
  • scope: UI - user interface and interactions
