
Cannot find 'config.yaml' and 'opensora_config.py' #1

Open
zjq0455 opened this issue Jul 4, 2024 · 2 comments


zjq0455 commented Jul 4, 2024

Hi,
I ran into this issue when trying to run 6.1 (normal inference) and 6.2 (inference with mixed precision) following your instructions, but something went wrong:

For 6.1 normal inference:
(viditq) root@485fa2842494:~/ViDiT-Q# CUDA_VISIBLE_DEVICES=6 python t2v/scripts/quant_txt2video.py ./t2v/configs/quant/W8A8/16x512x512.py --outdir ./logs/appendix/w8a8_ptq --ckpt_path ./logs/split_ckpt/OpenSora-v1-HQ-16x512x512-split-test.pth --dataset_type opensora --part_fp --save_dir w8a8_samples --precompute_text_embeds ./t2v/utils_files/text_embeds.pth
/root/anaconda3/envs/viditq/lib/python3.10/site-packages/colossalai/shardformer/layer/normalization.py:45: UserWarning: Please install apex from source (https://github.com/NVIDIA/apex) to use the fused layernorm kernel
warnings.warn("Please install apex from source (https://github.com/NVIDIA/apex) to use the fused layernorm kernel")
Config (path: ./t2v/configs/quant/W8A8/16x512x512.py): {'num_frames': 16, 'fps': 8, 'image_size': (512, 512), 'model': {'type': 'STDiT-XL/2', 'space_scale': 1.0, 'time_scale': 1.0, 'enable_flashattn': True, 'enable_layernorm_kernel': False, 'from_pretrained': './logs/split_ckpt/OpenSora-v1-HQ-16x512x512-split-test.pth'}, 'vae': {'type': 'VideoAutoencoderKL', 'from_pretrained': 'stabilityai/sd-vae-ft-ema', 'micro_batch_size': 128}, 'text_encoder': {'type': 't5', 'from_pretrained': 'DeepFloyd/t5-v1_1-xxl', 'model_max_length': 120}, 'scheduler': {'type': 'iddpm', 'num_sampling_steps': 20, 'cfg_scale': 7.0}, 'dtype': 'fp16', 'batch_size': 1, 'seed': 42, 'prompt_path': './t2v/assets/texts/t2v_samples.txt', 'multi_resolution': False, 'config': './t2v/configs/quant/W8A8/16x512x512.py', 'ckpt_path': './logs/split_ckpt/OpenSora-v1-HQ-16x512x512-split-test.pth', 'gpu': '0', 'outdir': './logs/appendix/w8a8_ptq', 'precompute_text_embeds': './t2v/utils_files/text_embeds.pth', 'save_dir': 'w8a8_samples_opensora', 'sampler': 'ddim', 'dataset_type': 'opensora', 'part_quant': False, 'skip_quant_weight': False, 'skip_quant_act': False, 'num_videos': 100, 'layer_wise_quant': False, 'group_wise_quant': False, 'timestep_wise_quant': False, 'block_group_wise_quant': False, 'quant_ratio': 1.0, 'part_fp': True, 'fp_ratio': 1.0, 'timestep_wise_mp': False, 'weight_mp': False, 'act_mp': False, 'block_wise_quant_progressively': False, 'block_wise_quant': False}
Traceback (most recent call last):
  File "/root/ViDiT-Q/t2v/scripts/quant_txt2video.py", line 235, in <module>
    main()
  File "/root/ViDiT-Q/t2v/scripts/quant_txt2video.py", line 51, in main
    config = OmegaConf.load(f"{opt.ptq_config}")
  File "/root/anaconda3/envs/viditq/lib/python3.10/site-packages/omegaconf/omegaconf.py", line 183, in load
    with io.open(os.path.abspath(file_), "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/root/ViDiT-Q/logs/appendix/w8a8_ptq/config.yaml'
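The FileNotFoundError above means quant_txt2video.py expects a config.yaml that an earlier PTQ run writes into the --outdir. A quick pre-flight check before launching inference could look like the sketch below; the path is taken from the traceback, and the check itself is an illustrative addition, not part of the repo:

```shell
# Path taken from the traceback above
outdir=./logs/appendix/w8a8_ptq

if [ -f "$outdir/config.yaml" ]; then
  echo "found $outdir/config.yaml"
else
  echo "missing: $outdir/config.yaml (run the PTQ step in section 5 first)"
fi
```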

For 6.2 inference with mixed precision:
(viditq) root@485fa2842494:~/ViDiT-Q# CUDA_VISIBLE_DEVICES=6 python t2v/scripts/quant_txt2video_mp.py ./logs/W4A8_Naive_Smooth_ptq/opensora_config.py --outdir ./logs/W4A8_Naive_Smooth_ptq --ckpt_path ./logs/split_ckpt/OpenSora-v1-HQ-16x512x512-split-test.pth --dataset_type opensora --part_fp --timestep_wise_mp --time_mp_config_weight ./t2v/configs/quant/W4A8_Naive_Smooth/t20_weight_4_mp.yaml --time_mp_config_act ./t2v/configs/quant/W4A8_Naive_Smooth/t20_act_8_mp.yaml --save_dir W4A8_Naive_Smooth_samples --precompute_text_embeds ./t2v/utils_files/text_embeds.pth
/root/anaconda3/envs/viditq/lib/python3.10/site-packages/colossalai/shardformer/layer/normalization.py:45: UserWarning: Please install apex from source (https://github.com/NVIDIA/apex) to use the fused layernorm kernel
warnings.warn("Please install apex from source (https://github.com/NVIDIA/apex) to use the fused layernorm kernel")
Traceback (most recent call last):
  File "/root/ViDiT-Q/t2v/scripts/quant_txt2video_mp.py", line 555, in <module>
    main()
  File "/root/ViDiT-Q/t2v/scripts/quant_txt2video_mp.py", line 192, in main
    cfg = parse_configs(training=False, mode="quant_inference")
  File "/root/ViDiT-Q/t2v/opensora/utils/config_utils.py", line 261, in parse_configs
    cfg = Config.fromfile(args.config)
  File "/root/anaconda3/envs/viditq/lib/python3.10/site-packages/mmengine/config/config.py", line 460, in fromfile
    lazy_import is None and not Config._is_lazy_import(filename):
  File "/root/anaconda3/envs/viditq/lib/python3.10/site-packages/mmengine/config/config.py", line 1661, in _is_lazy_import
    with open(filename, encoding='utf-8') as f:
FileNotFoundError: [Errno 2] No such file or directory: './logs/W4A8_Naive_Smooth_ptq/opensora_config.py'

Is there a file missing from the repository?

stein-666 (Collaborator) commented Jul 11, 2024

Thanks for your interest in our work. To generate videos, please first execute the “5. Post Training Quantization (PTQ) Process”. This process will create a directory named “logs” that will contain some essential files. Once this step is completed, you can proceed with “6. Inference Quantized Model”.

For instance, if you wish to perform W4A8 inference, you can execute the following commands in your terminal:
bash t2v/shell_scripts/W4A8_Naive_Smooth/ptq_w4a8.sh $gpu_id
bash t2v/shell_scripts/W4A8_Naive_Smooth/inference_w4a8.sh $gpu_id

Please note that the calibration data should be generated prior to all the above operations. You can do this by running:
bash t2v/shell_scripts/get_calib_data.sh $gpu_id
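Putting the three commands in order, a minimal wrapper might look like the sketch below. The script paths are the ones given above; the run helper and DRY_RUN guard are illustrative additions so the sequence can be inspected without executing anything:

```shell
#!/usr/bin/env bash
set -euo pipefail

gpu_id="${1:-0}"
dry_run="${DRY_RUN:-1}"   # set DRY_RUN=0 to actually execute the scripts

run() {
  if [ "$dry_run" = "1" ]; then
    echo "would run: $*"
  else
    bash "$@"
  fi
}

# 1. Calibration data must exist before PTQ
run t2v/shell_scripts/get_calib_data.sh "$gpu_id"
# 2. PTQ writes the ./logs directory, including the config files
#    (config.yaml / opensora_config.py) that the inference scripts load
run t2v/shell_scripts/W4A8_Naive_Smooth/ptq_w4a8.sh "$gpu_id"
# 3. Inference with the quantized model reads the PTQ output
run t2v/shell_scripts/W4A8_Naive_Smooth/inference_w4a8.sh "$gpu_id"
```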

zjq0455 (Author) commented Jul 15, 2024

Thank you for your reply! The problems above have been solved. Additionally, in 5.1 and 5.2, the setting CKPT_PATH="/logs/split_ckpt/OpenSora-v1-HQ-16x512x512-split.pth" should be changed to CKPT_PATH="./logs/split_ckpt/OpenSora-v1-HQ-16x512x512-split.pth"; a "." is missing.
