Fix three bugs in vocoder loss computation #797

Open
Mr-Neutr0n wants to merge 1 commit into mozilla:master from Mr-Neutr0n:fix/vocoder-loss-bugs

Conversation

@Mr-Neutr0n

Summary

Fixes three bugs in TTS/vocoder/layers/losses.py that cause incorrect loss values during vocoder training:

  • Boolean logic error in GeneratorLoss.forward(): In the condition if self.use_hinge_gan_loss and not scores_fake is not None, Python parses not scores_fake is not None as not (scores_fake is not None), i.e. scores_fake is None, because not binds more loosely than the is not comparison. The hinge loss block therefore ran exactly when scores_fake was None, raising a runtime error, and was skipped whenever real scores were provided. Fixed to and scores_fake is not None. Applied the same fix to the feat_match_loss guard, which had the analogous inverted condition on feats_fake.

  • Variable shadowing in _apply_D_loss(): The loop unpacking total_loss, real_loss, fake_loss = loss_func(...) shadows the accumulator variables real_loss and fake_loss initialized above. As a result, real_loss += real_loss just doubles the current iteration's value instead of accumulating across discriminator scales. Renamed the unpacked variables to cur_real_loss and cur_fake_loss.

  • Feature loss normalization in MelganFeatureLoss.forward(): loss_feats /= len(fake_feats) + len(real_feats) divides by 2N instead of N, since both feature lists have the same length. The loss is summed over N pairs, so the denominator should be N. Fixed to loss_feats /= len(fake_feats).

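A minimal sketch of the first two fixes in plain Python (the dummy loss_func below is a stand-in for the real per-scale discriminator loss, and apply_d_loss mirrors the corrected accumulation, not the library code verbatim):

```python
# Bug 1: `not x is not None` parses as `not (x is not None)`,
# i.e. `x is None` -- the guard is inverted.
scores_fake = None
assert (not scores_fake is not None) is True    # buggy guard fires on None
scores_fake = [0.1, 0.2]
assert (not scores_fake is not None) is False   # ...and skips real scores
assert scores_fake is not None                  # corrected guard

# Bug 2: naming the per-iteration values cur_* avoids shadowing the
# accumulators, so losses add up across discriminator scales.
def apply_d_loss(scores, loss_func):
    total_loss, real_loss, fake_loss = 0.0, 0.0, 0.0
    for score in scores:
        cur_total_loss, cur_real_loss, cur_fake_loss = loss_func(score)
        total_loss += cur_total_loss
        real_loss += cur_real_loss   # accumulates instead of doubling
        fake_loss += cur_fake_loss
    return total_loss, real_loss, fake_loss
```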
Test plan

  • Verify hinge GAN loss is correctly applied when use_hinge_gan_loss=True and scores_fake is provided
  • Verify feature matching loss is correctly applied when use_feat_match_loss=True and feats_fake is provided
  • Verify _apply_D_loss correctly accumulates real_loss and fake_loss across multiple discriminator scales
  • Verify MelganFeatureLoss returns correctly normalized loss values
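The normalization check can be sketched with illustrative scalar values (the real loss operates on torch feature-map tensors):

```python
# Feature-matching loss summed over N pairs of feature maps; the mean
# over pairs should divide by N, not by len(fake) + len(real) == 2N.
fake_feats = [1.0, 2.0, 3.0]
real_feats = [1.5, 2.5, 3.5]

summed = sum(abs(f - r) for f, r in zip(fake_feats, real_feats))  # 1.5

buggy = summed / (len(fake_feats) + len(real_feats))  # divides by 2N -> 0.25
fixed = summed / len(fake_feats)                      # divides by N  -> 0.5

assert fixed == 2 * buggy  # buggy loss was half the intended magnitude
```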

