
[Bug]: Cannot get result images to keep input image name / preview image bugged #16481

Open
nexus382 opened this issue Sep 12, 2024 · 0 comments
Labels: bug-report (Report of a bug, yet to be confirmed)
Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

So, I am not sure if I am just missing a setting, but while using the Extras tab to batch upscale, the output images get a different name than the input images. That means hours of renaming files. If this option does not exist, could it be added? I would be surprised if such a small feature had not been incorporated, which is why I think I am doing something wrong, or that there is a bug.

Then the second issue, which is definitely a bug: if you reuse the seed from a prior generation, the preview image won't update.

So definitely one bug, if not two. I make texture packs, so it is very important that the file names I put into Forge come back out the same. Even if the outputs had a prefix or suffix, I could bulk rename to remove those; see the sketch below.
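
In the meantime, a minimal workaround sketch for the renaming part, assuming the batch outputs did keep the original name plus some fixed suffix. The folder path and the "-upscaled" suffix below are placeholders, not what Forge actually writes:

```python
# Workaround sketch, not Forge code: strip a hypothetical "-upscaled"
# suffix from batch-output filenames so they match the input names again.
# OUTPUT_DIR and SUFFIX are placeholder assumptions; adjust them to
# whatever your outputs actually look like.
from pathlib import Path

OUTPUT_DIR = Path(r"F:\Raw Data Set Imgs\upscaled")  # placeholder output folder
SUFFIX = "-upscaled"                                 # placeholder suffix

for img in OUTPUT_DIR.iterdir():
    if img.is_file() and SUFFIX in img.stem:
        fixed = img.with_name(img.stem.replace(SUFFIX, "", 1) + img.suffix)
        if not fixed.exists():   # never overwrite an existing file
            img.rename(fixed)
            print(f"{img.name} -> {fixed.name}")
```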

Steps to reproduce the problem

  1. In img2img, set the seed to -1.
  2. Generate.
  3. Copy the seed from the output file and use it in the UI for the next images.
  4. Every image you generate with this seed, until you close and restart, shows the same image in the preview in the web UI, but in the output folder the image looks correct.

What should have happened?

I should see the newly generated image.

What browsers do you use to access the UI ?

Google Chrome

Sysinfo

sysinfo-2024-09-12-01-26.json

Console logs

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-421-g59dd981f
Commit hash: 59dd981fa78b767a9973a8cd1d555e3cb851c62b
Launching Web UI with arguments:
Total VRAM 8192 MB, total RAM 32530 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2070 : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: False
F:\Raw Data Set Imgs\webui_forge_cu121_torch231\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: F:\Raw Data Set Imgs\webui_forge_cu121_torch231\webui\models\ControlNetPreprocessor
2024-09-11 19:41:49,362 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\Stable-diffusion\\cyberrealistic25D_v10.safetensors', 'hash': '2052b672'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 53.3s (prepare environment: 13.2s, import torch: 22.0s, initialize shared: 0.4s, other imports: 2.5s, list SD models: 0.1s, load scripts: 5.7s, create ui: 6.7s, gradio launch: 3.0s).
Environment vars changed: {'stream': False, 'inference_memory': 2259.0, 'pin_shared_memory': False}
[GPU Setting] You will use 72.42% GPU memory (5932.00 MB) to load weights, and use 27.58% GPU memory (2259.00 MB) to do matrix computation.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
WARNING:modules.modelloader:Model 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\RealESRGAN\\realesr-general-wdn-x4v3.pth' is not a 'ESRGAN' model (got 'RealESRGAN Compact')
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
WARNING:modules.modelloader:Model 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\RealESRGAN\\realesr-general-wdn-x4v3.pth' is not a 'ESRGAN' model (got 'RealESRGAN Compact')
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
WARNING:modules.modelloader:Model 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\RealESRGAN\\realesr-general-x4v3.pth' is not a 'ESRGAN' model (got 'RealESRGAN Compact')
Environment vars changed: {'stream': False, 'inference_memory': 2259.0, 'pin_shared_memory': False}
[GPU Setting] You will use 72.42% GPU memory (5932.00 MB) to load weights, and use 27.58% GPU memory (2259.00 MB) to do matrix computation.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
WARNING:modules.modelloader:Model 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\RealESRGAN\\realesr-general-x4v3.pth' is not a 'ESRGAN' model (got 'RealESRGAN Compact')
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
WARNING:modules.modelloader:Model 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\RealESRGAN\\realesr-general-x4v3.pth' is not a 'ESRGAN' model (got 'RealESRGAN Compact')
[Unload] Trying to free 2259.00 MB for cuda:0 with 0 models keep loaded ...
Cleanup minimal inference memory.
Loading Model: {'checkpoint_info': {'filename': 'F:\\Raw Data Set Imgs\\webui_forge_cu121_torch231\\webui\\models\\Stable-diffusion\\cyberrealistic25D_v10.safetensors', 'hash': '2052b672'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ...
StateDict Keys: {'unet': 686, 'vae': 248, 'text_encoder': 197, 'ignore': 0}
F:\Raw Data Set Imgs\webui_forge_cu121_torch231\system\python\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float16}
Model loaded in 21.4s (unload existing model: 0.2s, forge model load: 21.1s).
To load target model IntegratedAutoencoderKL
Begin to load 1 model
[Unload] Trying to free 2673.85 MB for cuda:0 with 0 models keep loaded ...
[Memory Management] Current Free GPU Memory: 7091.00 MB
[Memory Management] Required Model Memory: 319.11 MB
[Memory Management] Required Inference Memory: 2259.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4512.89 MB
Moving model(s) has taken 0.11 seconds
To load target model JointTextEncoder
Begin to load 1 model
[Unload] Trying to free 2564.14 MB for cuda:0 with 0 models keep loaded ...
[Unload] Current free memory is 6780.59 MB ...
[Memory Management] Current Free GPU Memory: 6780.59 MB
[Memory Management] Required Model Memory: 234.72 MB
[Memory Management] Required Inference Memory: 2259.00 MB
[Memory Management] Estimated Remaining GPU Memory: 4286.87 MB
Moving model(s) has taken 0.08 seconds
[Unload] Trying to free 2259.00 MB for cuda:0 with 1 models keep loaded ...
[Unload] Current free memory is 6436.31 MB ...
token_merging_ratio = 0.5
To load target model KModel
Begin to load 1 model
[Unload] Trying to free 4390.23 MB for cuda:0 with 0 models keep loaded ...
[Unload] Current free memory is 6436.02 MB ...
[Memory Management] Current Free GPU Memory: 6436.02 MB
[Memory Management] Required Model Memory: 1639.41 MB
[Memory Management] Required Inference Memory: 2259.00 MB
[Memory Management] Estimated Remaining GPU Memory: 2537.61 MB
Moving model(s) has taken 0.56 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00,  3.97it/s]
[Unload] Trying to free 2259.00 MB for cuda:0 with 1 models keep loaded ...████████████| 20/20 [00:04<00:00,  4.72it/s]

Additional information

No response

nexus382 added the bug-report label on Sep 12, 2024