
Would you provide more information about SkyMath? #8

Open
yucc-leon opened this issue Oct 31, 2023 · 2 comments

Comments

@yucc-leon

Your paper suggests that Instruction Boosting and Self-Compare FT are very helpful, but Instruction Boosting looks like WizardLM's Evol-Instruct and Self-Compare looks very similar to PHP, and from the tech report I cannot tell what the differences between them are.

@TianwenWei
Contributor

The SkyMath method mainly consists of two parts: instruction boosting and self-compare.
1. Instruction boosting primarily draws inspiration from WizardLM and MetaMath. We integrate and improve their methods to enhance instructions, as described in the paper.
2. Self-compare is inspired by PHP, but there are significant differences between them: PHP mainly uses progressive hints during the reasoning process, while self-compare has the LLM compare its previous answers with the standard solutions during training.
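For illustration only, here is a minimal sketch of what a self-compare training example in that spirit might look like. This is not the authors' actual implementation (which is exactly what this issue is asking for); the prompt wording, field names, and the exact-match comparison are all assumptions.

```python
# Hypothetical sketch of building one "self-compare" fine-tuning pair:
# the model is trained to judge its own previous answer against the
# reference (standard) solution. All wording here is an assumption.

def build_self_compare_example(question: str,
                               previous_answer: str,
                               reference_solution: str) -> dict:
    """Assemble one supervised pair in which the model must state
    whether its earlier answer agrees with the reference solution."""
    prompt = (
        f"Question: {question}\n"
        f"Your previous answer: {previous_answer}\n"
        f"Reference solution: {reference_solution}\n"
        "Compare your previous answer with the reference solution. "
        "State whether they agree and, if not, explain the error."
    )
    # Toy agreement check; a real pipeline would need semantic or
    # numeric comparison, not string equality.
    agree = previous_answer.strip() == reference_solution.strip()
    target = (
        "The previous answer agrees with the reference solution."
        if agree else
        "The previous answer disagrees with the reference solution "
        "and should be revised."
    )
    return {"prompt": prompt, "target": target}

example = build_self_compare_example("What is 2 + 3?", "5", "5")
```

The point of the sketch is only to contrast with PHP: here the comparison happens in the training target, not as a progressive hint at inference time.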

@yucc-leon
Author

Thanks for explaining, but that is just a paraphrased version of the section in your paper.
MetaMath open-sourced their example data and the prompts they used, so people can easily verify and reproduce their work. WizardLM also released their code and final dataset. Since your work surpassed both, I was wondering whether more details could be shared to help others find out what really matters in constructing such datasets.
