Commit Graph

267 Commits

Author SHA1 Message Date
BiologicalExplosion 2222cf67fd
MLU memory optimization (#7470)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
BVH 301e26b131
Add option to store TE in bf16 (#7461) 2025-04-01 13:48:53 -04:00
comfyanonymous 8edc1f44c1 Support more float8 types. 2025-03-25 05:23:49 -04:00
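For reference, a minimal sketch of enumerating the float8 dtypes that recent PyTorch builds expose; exactly which variants this commit adds is not stated in the message, so the list below is an assumption, guarded so older torch versions don't crash:

```python
import torch

# Float8 variants exposed by recent PyTorch builds; the fnuz forms are the
# ROCm-oriented ones. Which of these the commit covers is assumed, not stated.
FLOAT8_DTYPES = [
    dtype
    for name in ("float8_e4m3fn", "float8_e5m2", "float8_e4m3fnuz", "float8_e5m2fnuz")
    if (dtype := getattr(torch, name, None)) is not None  # skip dtypes this torch lacks
]

def is_float8(dtype: torch.dtype) -> bool:
    """Return True if the dtype is one of the available float8 variants."""
    return dtype in FLOAT8_DTYPES
```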
FeepingCreature 7aceb9f91c
Add --use-flash-attention flag. (#7223)
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than PyTorch cross-attention.
2025-03-14 03:22:41 -04:00
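For context, a hedged sketch of routing attention through the flash-attn package; flash_attn_func and its (batch, seq_len, heads, head_dim) layout come from that library, while how the --use-flash-attention flag wires into ComfyUI's attention code isn't shown in this log:

```python
import torch
from flash_attn import flash_attn_func  # requires the flash-attn package

# flash_attn_func expects (batch, seq_len, num_heads, head_dim) tensors in fp16/bf16.
q = torch.randn(1, 1024, 8, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 1024, 8, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 1024, 8, 64, dtype=torch.float16, device="cuda")

out = flash_attn_func(q, k, v, dropout_p=0.0, causal=False)
print(out.shape)  # torch.Size([1, 1024, 8, 64])
```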
comfyanonymous 35504e2f93 Fix. 2025-03-13 15:03:18 -04:00
comfyanonymous 299436cfed Print mac version. 2025-03-13 10:05:40 -04:00
comfyanonymous 0952569493 Fix stable cascade VAE on some lowvram machines. 2025-03-08 20:24:04 -05:00
Chenlei Hu 4d55f16ae8
Use enum list for --fast options (#7024) 2025-03-01 02:37:35 -05:00
comfyanonymous cf0b549d48 --fast now takes a number as an argument to indicate how fast you want it.
The idea is that you can indicate the quality vs. speed trade-off you want.

At the moment:

--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.

--fast without a number enables all optimizations.
2025-02-28 02:48:20 -05:00
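A minimal sketch of how a numeric --fast level could gate those two optimizations; the thresholds (2 and 5) and the bare-flag behavior come from the commit message above, while the argument parsing and the allow_fp16_accumulation toggle are illustrative, not ComfyUI's actual code:

```python
import argparse

import torch

parser = argparse.ArgumentParser()
# nargs="?" lets a bare "--fast" (no number) mean "enable all optimizations".
parser.add_argument("--fast", nargs="?", type=int, const=10, default=0)
args = parser.parse_args()

# Thresholds follow the commit message: 2 -> fp16 accumulation,
# 5 -> fp8 matrix multiplication on fp8 models (plus the optimization above).
if args.fast >= 2 and hasattr(torch.backends.cuda.matmul, "allow_fp16_accumulation"):
    torch.backends.cuda.matmul.allow_fp16_accumulation = True  # recent PyTorch only
use_fp8_matmul = args.fast >= 5
```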
comfyanonymous eb4543474b Use fp16 for intermediate for fp8 weights with --fast if supported. 2025-02-28 02:17:50 -05:00
comfyanonymous 1804397952 Use fp16 if checkpoint weights are fp16 and the model supports it. 2025-02-27 16:39:57 -05:00
BiologicalExplosion 89253e9fe5
Support Cambricon MLU (#6964)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
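For context, a sketch of the usual way Cambricon MLU availability is probed from Python, assuming the torch_mlu extension Cambricon ships; whether ComfyUI detects it exactly like this isn't shown here:

```python
import torch

try:
    import torch_mlu  # noqa: F401  Cambricon's PyTorch extension, if installed
    mlu_available = hasattr(torch, "mlu") and torch.mlu.is_available()
except ImportError:
    mlu_available = False

device = torch.device("mlu") if mlu_available else torch.device("cpu")
print(device)
```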
comfyanonymous 96d891cb94 Speedup on some models by not upcasting bfloat16 to float32 on mac. 2025-02-24 05:41:32 -05:00
comfyanonymous ace899e71a Prioritize fp16 compute when using allow_fp16_accumulation 2025-02-23 04:45:54 -05:00
comfyanonymous 072db3bea6 Assume the mac black image bug won't be fixed before v16. 2025-02-21 20:24:07 -05:00
comfyanonymous a6deca6d9a Latest mac still has the black image bug. 2025-02-21 20:14:30 -05:00
comfyanonymous 41c30e92e7 Let all model memory be offloaded on nvidia. 2025-02-21 06:32:21 -05:00
comfyanonymous 12da6ef581 Apparently directml supports fp16. 2025-02-20 09:30:24 -05:00
comfyanonymous b07258cef2 Fix typo.
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
comfyanonymous 31e54b7052 Improve AMD arch detection. 2025-02-17 04:53:40 -05:00
comfyanonymous 8c0bae50c3 bf16 manual cast works on old AMD. 2025-02-17 04:42:40 -05:00
comfyanonymous 530412cb9d Refactor torch version checks to be more future proof. 2025-02-17 04:36:45 -05:00
comfyanonymous e2919d38b4 Disable bf16 on AMD GPUs that don't support it. 2025-02-16 05:46:10 -05:00
comfyanonymous 1cd6cd6080 Disable pytorch attention in VAE for AMD. 2025-02-14 05:42:14 -05:00
comfyanonymous d7b4bf21a2 Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
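On ROCm builds the GPU architecture can be read from the device properties; a sketch of gating the optimization on the arch name, with gfx1100 as the only entry for now per the commit above (the gcnArchName lookup is standard PyTorch-on-ROCm, the gating helper is illustrative):

```python
import torch

# Arch names that get memory-efficient attention by default; per the commit,
# gfx1100 is the only known-good entry so far.
MEM_EFFICIENT_ARCHES = ("gfx1100",)

def should_enable_mem_efficient_attention() -> bool:
    if not torch.cuda.is_available():
        return False
    props = torch.cuda.get_device_properties(0)
    # gcnArchName is only present on ROCm builds, e.g. "gfx1100".
    arch = getattr(props, "gcnArchName", "")
    return any(arch.startswith(a) for a in MEM_EFFICIENT_ARCHES)
```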
comfyanonymous 8773ccf74d Better memory estimation for ROCm cards that support mem efficient attention.
There is no way to check whether the card actually supports it, so support is
assumed if you use --use-pytorch-cross-attention on your card.
2025-02-13 08:32:36 -05:00
comfyanonymous 1d5d6586f3 Fix ruff. 2025-02-12 06:49:16 -05:00
zhoufan2956 35740259de
mix_ascend_bf16_infer_err (#6794) 2025-02-12 06:48:11 -05:00
HishamC b124256817
Fix for running via DirectML (#6542)
* Fix for running via DirectML

Fix DirectML empty image generation issue with Flux1. Add a CPU fallback for unsupported paths. Verified the model works on AMD GPUs.

* Fix formatting

* Update causal mask calculation
2025-02-11 17:11:32 -05:00
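For context, a minimal sketch of getting a DirectML device through the torch-directml package; the Flux1-specific CPU fallback mentioned in the commit is not reproduced here:

```python
import torch
import torch_directml  # pip install torch-directml

device = torch_directml.device()  # default DirectML adapter
x = torch.randn(4, 4, dtype=torch.float16).to(device)
# Ops that a DirectML backend doesn't support can be run on the CPU instead.
y = (x @ x).to("cpu")
print(y.dtype)  # torch.float16
```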
comfyanonymous af4b7c91be Make --force-fp16 actually force the diffusion model to be fp16. 2025-02-11 08:33:09 -05:00
catboxanon 43a74c0de1
Allow FP16 accumulation with `--fast` (#6453)
Currently only applies to PyTorch nightly releases. (>=20250208)
2025-02-08 17:00:56 -05:00
comfyanonymous 255edf2246 Lower minimum ratio of loaded weights on Nvidia. 2025-01-27 05:26:51 -05:00
comfyanonymous 67feb05299 Remove redundant code. 2025-01-25 19:04:53 -05:00
comfyanonymous d45ebb63f6 Remove old unused function. 2025-01-04 07:20:54 -05:00
comfyanonymous 9e9c8a1c64 Clear cache as often on AMD as Nvidia.
I think the issue this was working around has been solved.

If you notice that this change slows things down or causes stutters on
your AMD GPU with ROCm on Linux please report it.
2025-01-02 08:44:16 -05:00
comfyanonymous 160ca08138 Use python 3.9 in launch test instead of 3.8
Fix ruff check.
2024-12-26 20:05:54 -05:00
Huazhong Ji c4bfdba330
Support ascend npu (#5436)
* support ascend npu

Co-authored-by: YukMingLaw <lymmm2@163.com>
Co-authored-by: starmountain1997 <guozr1997@hotmail.com>
Co-authored-by: Ginray <ginray0215@gmail.com>
2024-12-26 19:36:50 -05:00
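A sketch of the usual torch_npu detection pattern for Ascend devices; the actual integration in ComfyUI's device handling may differ:

```python
import torch

try:
    import torch_npu  # noqa: F401  Huawei Ascend extension for PyTorch
    npu_available = torch.npu.is_available()
except ImportError:
    npu_available = False

device = torch.device("npu:0") if npu_available else torch.device("cpu")
model = torch.nn.Linear(8, 8).to(device)
```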
comfyanonymous 19a64d6291 Cleanup some mac related code. 2024-12-25 05:32:51 -05:00
comfyanonymous b486885e08 Disable bfloat16 on older mac. 2024-12-25 05:18:50 -05:00
comfyanonymous 0229228f3f Clean up the VAE dtypes code. 2024-12-25 04:50:34 -05:00
comfyanonymous 15564688ed Add a try except block so if torch version is weird it won't crash. 2024-12-23 03:22:48 -05:00
Simon Lui c6b9c11ef6
Add oneAPI device selector for xpu and some other changes. (#6112)
* Add oneAPI device selector and some other minor changes.

* Fix device selector variable name.

* Flip minor version check sign.

* Undo changes to README.md.
2024-12-23 03:18:32 -05:00
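The oneAPI runtime reads the ONEAPI_DEVICE_SELECTOR environment variable to restrict which devices are visible; a sketch of setting it before the XPU runtime initializes (the variable is a documented oneAPI mechanism, the surrounding logic is illustrative):

```python
import os

# Must be set before the XPU runtime initializes; "level_zero:0" pins the first
# Level Zero GPU, similar in spirit to CUDA_VISIBLE_DEVICES.
os.environ.setdefault("ONEAPI_DEVICE_SELECTOR", "level_zero:0")

import torch

if hasattr(torch, "xpu") and torch.xpu.is_available():
    device = torch.device("xpu")
else:
    device = torch.device("cpu")
print(device)
```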
comfyanonymous e44d0ac7f7 Make --novram completely offload weights.
This flag is mainly used for testing weight offloading; it shouldn't
actually be used in practice.

Remove useless import.
2024-12-23 01:51:08 -05:00
comfyanonymous 57f330caf9 Relax minimum ratio of weights loaded in memory on nvidia.
This should make it possible to do higher res images/longer videos by
further offloading weights to CPU memory.

Please report an issue if this slows down things on your system.
2024-12-22 03:06:37 -05:00
Chenlei Hu d7969cb070
Replace print with logging (#6138)
* Replace print with logging

* nit

* nit

* nit

* nit

* nit

* nit
2024-12-20 16:24:55 -05:00
comfyanonymous 2dda7c11a3 More proper fix for the memory issue. 2024-12-19 16:21:56 -05:00
comfyanonymous 3ad3248ad7 Fix lowvram bug when using a model multiple times in a row.
The memory system would load an extra 64MB each time until either the
model was completely in memory or OOM.
2024-12-19 16:04:56 -05:00
comfyanonymous 37e5390f5f Add: --use-sage-attention to enable SageAttention.
You need to have the library installed first.
2024-12-18 01:56:10 -05:00
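A sketch of calling SageAttention directly, assuming the sageattention package's sageattn entry point; the --use-sage-attention flag only routes ComfyUI's attention through the library once it is installed:

```python
import torch
from sageattention import sageattn  # pip install sageattention

# "HND" layout: (batch, num_heads, seq_len, head_dim), fp16/bf16 tensors on CUDA.
q = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")

out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
print(out.shape)  # torch.Size([1, 8, 1024, 64])
```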
Chenlei Hu d9d7f3c619
Lint all unused variables (#5989)
* Enable F841

* Autofix

* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
comfyanonymous fd5dfb812c Set initial load devices for te and model to mps device on mac. 2024-12-12 06:00:31 -05:00