
Is it possible to use p-tuning v2 during inference without causing any impact on the backbone model's performance? #65

Open
JuhaoLiang1997 opened this issue Oct 9, 2023 · 0 comments

Comments

@JuhaoLiang1997

Hi,

I've observed that running inference with p-tuning v2 and all-zero prefix parameters still changes the behavior of the original model. I'd like to know whether it's feasible to attach a prefix prompt without affecting the backbone model's behavior at all, and I'm not sure whether my experiment itself is at fault. Your input on this matter would be greatly appreciated. Thank you. This is how I zero out the prefix:

# Zero out every prefix key/value tensor before running inference
# (torch.zeros_like already preserves dtype, so no explicit dtype is needed)
past_key_values = tuple(torch.zeros_like(pkv) for pkv in past_key_values)
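For concreteness, here is a minimal standalone sketch of the comparison, assuming an off-the-shelf Hugging Face GPT-2 model rather than the actual p-tuning v2 codebase; the model name, prefix length, and tensor shapes below are illustrative assumptions, not taken from this repo:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
inputs = tokenizer("Hello, world", return_tensors="pt")

with torch.no_grad():
    baseline_logits = model(**inputs).logits

# Build an all-zero "prefix" in past_key_values format: one (key, value)
# pair per layer, each of shape (batch, num_heads, prefix_len, head_dim).
prefix_len = 4  # illustrative choice
cfg = model.config
head_dim = cfg.n_embd // cfg.n_head
zeros = torch.zeros(1, cfg.n_head, prefix_len, head_dim)
past_key_values = tuple((zeros, zeros) for _ in range(cfg.n_layer))

# The attention mask must also cover the prefix positions.
attention_mask = torch.cat(
    [torch.ones(1, prefix_len, dtype=torch.long), inputs["attention_mask"]],
    dim=1,
)

with torch.no_grad():
    prefixed_logits = model(
        input_ids=inputs["input_ids"],
        attention_mask=attention_mask,
        past_key_values=past_key_values,
    ).logits

# Expected to print False: even all-zero keys/values receive nonzero
# softmax attention weight, so the prefix dilutes attention over the real
# tokens (and the default position ids shift by prefix_len), which is
# presumably why the logits differ from the plain model's.
print(torch.allclose(baseline_logits, prefixed_logits, atol=1e-5))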
