v2.2.3
New Features:
- Support BNB (bitsandbytes) and Ollama export
- Support Q-Galore

New Models:
- numina-math-7b

Bug Fixes:
- Fix tensor parallelism (TP) with vllm>=0.5.1
- Fix the internvl2 template
- Fix glm4v merge-lora
What's Changed
- fix internvl doc by @hjh0119 in #1394
- Fix link by @Jintao-Huang in #1397
- fix vllm==0.5.1 by @Jintao-Huang in #1404
- [TorchAcc] update accelerate API and add llama3-70B by @baoleai in #1400
- Support Ollama and BNB for export by @tastelikefeet in #1407
- Fix glm4v merge lora by @Jintao-Huang in #1410
- [TorchAcc] fix model download when using TorchAcc distributed training by @baoleai in #1408
- Support padding left by @tastelikefeet in #1414
- Fix ollama export by @tastelikefeet in #1416
- fix web-ui params by @tastelikefeet in #1417
- fix hub_token by @Jintao-Huang in #1420
- Update ms hub token by @Jintao-Huang in #1424
- Add numina math model by @tastelikefeet in #1421
- fix internvl template by @Jintao-Huang in #1433
- Internvl series models update by @hjh0119 in #1426
- fix internvl2 template by @Jintao-Huang in #1436
- Fix bug and make lazydataset more stable by @tastelikefeet in #1438
- Fix llava-hf by @tastelikefeet in #1439
- [WIP] Support Q-Galore by @tastelikefeet in #1440
- Support deepspeed on the web-ui and add tools to client_utils by @tastelikefeet in #1446
- Fix reading csv files with float values by @Jintao-Huang in #1447
- fix dataset by @tastelikefeet in #1448
- update internvl doc by @hjh0119 in #1449
Full Changelog: v2.2.2...v2.2.3