
How does the Normalizing Flow loss in SRFlow behave? #12

Open

avinash31d opened this issue Jan 22, 2021 · 2 comments

Comments

@avinash31d

avinash31d commented Jan 22, 2021

I followed this repo, the paper, and the Glow paper. My loss looks like this:

    logp = GaussianDiag.logp(None, None, z)
    obj = logdets + logp
    loss = -obj / (tf.math.log(2.) * tf.cast(pixels, tf.float32))

Two terms are added together: logp, which is always negative, and logdets, which starts out negative during training. Because of the minus sign in -obj, the loss is positive at first. After training for around 100k steps, however, both logp and logdets increase; the logp term eventually becomes positive, and the final loss goes negative, to around -3.xxx. Is this expected behaviour?
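For what it's worth, a negative NLL is not by itself a contradiction for continuous models: a probability density can exceed 1, so its log can be positive. Here is a minimal sketch (a plain diagonal Gaussian, not code from this repo) showing how a small standard deviation pushes the per-dimension log-density above zero:

    import numpy as np

    def gaussian_logp(z, mean=0.0, sigma=1.0):
        # Per-dimension log-density of a diagonal Gaussian:
        # log N(z; mean, sigma^2)
        #   = -0.5 * (log(2*pi) + 2*log(sigma) + ((z - mean) / sigma)**2)
        return -0.5 * (np.log(2 * np.pi) + 2 * np.log(sigma)
                       + ((z - mean) / sigma) ** 2)

    print(gaussian_logp(0.0))             # ~ -0.92: standard normal peak, still negative
    print(gaussian_logp(0.0, sigma=0.1))  # ~ +1.38: density above 1, so logp > 0

In a flow, logdets is added on top of logp, so the combined objective can keep growing past zero as the model fits the data more tightly; a negative loss on its own is therefore not evidence of a bug.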

@andreas128
Owner

The final loss of SRFlow is negative and steadily decreasing:

[Figure: PyTorch Normalizing Flow loss curve, negative and steadily decreasing over training]

We did not track those two components separately. Please try to run our code and log the loss (NLL is defined here).
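As a reference for such logging, here is a PyTorch-style sketch that mirrors the bits-per-pixel normalization in the TensorFlow snippet above; the helper name nll_bits_per_pixel and the example values are illustrative, not taken from the SRFlow repo:

    import math
    import torch

    def nll_bits_per_pixel(logp, logdet, num_pixels):
        # Negative log-likelihood normalized to bits per pixel;
        # num_pixels is C * H * W of the high-resolution image.
        obj = logp + logdet
        return -obj / (math.log(2.0) * num_pixels)

    # Logging the two components next to the total shows which term
    # drives the loss below zero (values here are made up):
    logp = torch.tensor([-1.2e4])
    logdet = torch.tensor([3.5e4])
    pixels = 3 * 160 * 160
    print(logp.item(), logdet.item(),
          nll_bits_per_pixel(logp, logdet, pixels).item())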

Did that help?

@andreas128 changed the title from "Negative loss? is that a problem?" to "How does the Normalizing Flow loss in SRFlow behave?" on Jan 23, 2021
@avinash31d
Author

Hi @andreas128,

Thanks, that is what I was looking for, and it clears up my doubt about the logdets term.
