Noisy outputs when running LJSpeech checkpoint on Tacotron mel spectrograms #4
@Rongjiehuang it would be great if you could take a look :-)
@patrickvonplaten Hi, the demo code has been updated in egs/, which I hope is helpful. Besides, I found that this noisy output is due to a mel-preprocessing mismatch between the acoustic model and the vocoder, so the mel output from this Tacotron2 cannot be properly vocoded. For text-to-speech synthesis, PortaSpeech (using this implementation) + FastDiff is more likely to generate reasonable results.
To vocode spectrograms generated from Tacotron, we would need to retrain the FastDiff model with spectrograms that are processed in the same way.
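To make the mismatch concrete, here is a small illustrative sketch (not code from either repo): the same mel-filterbank power values compressed under two common conventions end up in very different numeric ranges, so a vocoder trained on one convention sees out-of-distribution inputs when fed the other.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a mel-filterbank power spectrogram (80 bands x 100 frames)
mel_power = rng.uniform(1e-5, 1.0, size=(80, 100))

# Convention A (common in Tacotron2-style setups): clamped natural-log compression
mel_a = np.log(np.clip(mel_power, 1e-5, None))

# Convention B (common in some vocoder recipes): dB scale, then rescaled to [0, 1]
mel_db = 20.0 * np.log10(np.clip(mel_power, 1e-5, None))
mel_b = np.clip((mel_db + 100.0) / 100.0, 0.0, 1.0)

# Same underlying audio, very different value ranges
print("log-mel range:", float(mel_a.min()), float(mel_a.max()))
print("normalized-dB range:", float(mel_b.min()), float(mel_b.max()))
```

A vocoder trained on convention B interprets a convention-A mel as wildly out-of-range input, which typically comes out as the kind of noisy audio described in this issue.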
Hey @Rongjiehuang, thanks for answering so quickly! It seems the example only shows how to vocode mel spectrograms derived from the ground truth (i.e., the audio itself). Do you think it would be possible to also include an example of which text-to-mel-spectrogram model should be used, so that users can see how well FastDiff performs on text-to-speech?
Hi, the TTS example has been included; please refer to https://github.com/Rongjiehuang/FastDiff#inference-for-text-to-speech-synthesis
@Rongjiehuang |
I guess they mean that the preprocessing should be the same for both models. I am currently working on a similar task and am possibly lost in the preprocessing steps of two different model repositories. Is there any documentation of the required preprocessing steps?
@peyyman For example, the normalization stage in preprocessing.
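In practice, a quick way to audit this is to list the STFT/mel hyper-parameters of both repos side by side and diff them. The parameter names and values below are made-up placeholders for illustration, not the actual settings of either repository:

```python
# Hypothetical hyper-parameter sets for an acoustic model and a vocoder.
# Any key that differs is a candidate cause of a preprocessing mismatch.
acoustic = dict(sample_rate=22050, n_fft=1024, hop_size=256, win_size=1024,
                num_mels=80, fmin=0, fmax=8000, mel_scale="log")
vocoder = dict(sample_rate=22050, n_fft=1024, hop_size=256, win_size=1024,
               num_mels=80, fmin=55, fmax=7600, mel_scale="db_normalized")

mismatches = {k: (acoustic[k], vocoder[k])
              for k in acoustic if acoustic[k] != vocoder[k]}
print(mismatches)  # keys whose values differ between the two configs
```

Every mismatched key, especially the mel-scale/normalization choice, needs to be reconciled before the vocoder can consume the acoustic model's output.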
Is it possible to train FastDiff with mel files (instead of wav files) as input using this codebase? That is, are the required code changes large, or can it be done with some tweaking?
Sorry for the late reply; I have been working on this over the past few days. The LJSpeech checkpoint for neural vocoding of Tacotron2 output, along with the corresponding script, has been provided. Please refer to https://github.com/Rongjiehuang/FastDiff/#using-tacotron. If you want to train FastDiff (Tacotron) yourself, use this config:
Hey @Rongjiehuang,
Thanks a lot for open-sourcing the checkpoint for the FastDiff vocoder for LJSpeech!
I played around with the code a bit and I'm only getting quite noisy generations when decoding the mel spectrogram of a tacotron with FastDiff's vocoder.
Here is the code to reproduce:
After listening to test.wav, one can identify the correct sentence, but the output is extremely noisy. Any ideas what the reason for this could be? Are any of the hyper-parameters set incorrectly? Or does FastDiff only work with a certain type of mel spectrogram? It would be very nice if you could take a quick look and check whether I have messed up some part of the code 😅
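One cheap diagnostic before retraining anything (a sketch under assumed numbers, not code from this thread) is to compare the value range of the incoming mel against the range the vocoder saw during training; a large deviation usually points to a normalization mismatch rather than a broken checkpoint:

```python
import numpy as np

def check_mel_range(mel, expected_min, expected_max, tol=0.5):
    """Return (ok, observed_range). A range far outside what the vocoder
    was trained on usually indicates a preprocessing/normalization mismatch."""
    lo, hi = float(mel.min()), float(mel.max())
    ok = (lo >= expected_min - tol) and (hi <= expected_max + tol)
    return ok, (lo, hi)

# Hypothetical scenario: the vocoder was trained on mels normalized to [0, 1],
# but the incoming Tacotron mel is log-compressed into roughly [-11.5, 0].
rng = np.random.default_rng(2)
suspect_mel = np.log(np.clip(rng.uniform(1e-5, 1.0, size=(80, 60)), 1e-5, None))
ok, observed = check_mel_range(suspect_mel, expected_min=0.0, expected_max=1.0)
print(ok, observed)  # ok is False: the range mismatch flags the problem
```

If the check fails, the fix is the one discussed above: make both repos share one mel-extraction pipeline, or retrain the vocoder on mels processed the acoustic model's way.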