leejet
ba8c92a6b8
update docs
2025-11-29 15:16:41 +08:00
leejet
2ddbfe5dde
add Flux2FlowDenoiser
2025-11-29 15:07:06 +08:00
leejet
7a2a7d0767
rename qwenvl to llm
2025-11-29 14:06:46 +08:00
leejet
66e27de9bd
add flux2 support
2025-11-29 02:20:12 +08:00
leejet
20345888a3
refactor: optimize the handling of sample method ( #999 )
master-377-2034588
2025-11-22 14:00:25 +08:00
akleine
490c51d963
feat: report success/failure when saving PNG/JPG output ( #912 )
master-376-490c51d
2025-11-22 13:57:44 +08:00
Wagner Bruna
45c46779af
feat: add LCM scheduler ( #983 )
master-375-45c4677
2025-11-22 13:53:31 +08:00
leejet
869d023416
refactor: optimize the handling of scheduler ( #998 )
2025-11-22 12:48:53 +08:00
akleine
e9bc3b6c06
fix: check the PhotoMaker id_embeds tensor ONLY in PhotoMaker V2 mode ( #987 )
master-373-e9bc3b6
2025-11-22 12:47:40 +08:00
Wagner Bruna
b542894fb9
fix: avoid crash on default video preview path ( #997 )
...
Co-authored-by: masamaru-san
master-372-b542894
2025-11-22 12:46:27 +08:00
leejet
5498cc0d67
feat: add Wan2.1-I2V-1.3B(SkyReels) support ( #988 )
master-371-5498cc0
2025-11-19 23:56:46 +08:00
stduhpf
aa2b8e0ca5
fix: patch 1x1 conv weights at runtime ( #986 )
master-370-aa2b8e0
2025-11-19 23:27:23 +08:00
rmatif
a14e2b321d
feat: add easycache support ( #940 )
master-369-a14e2b3
2025-11-19 23:19:32 +08:00
leejet
28ffb6c13d
fix: resolve issue with concat multiple LoRA output diffs at runtime ( #985 )
master-368-28ffb6c
2025-11-17 22:56:07 +08:00
leejet
b88cc32346
fix: avoid using same type but diff instances for rng and sampler_rng ( #982 )
master-367-b88cc32
2025-11-16 23:37:14 +08:00
leejet
f532972d60
fix: avoid precision issues on vulkan backend ( #980 )
master-366-f532972
2025-11-16 20:57:08 +08:00
leejet
d5b05f70c6
feat: support independent sampler rng ( #978 )
master-365-d5b05f7
2025-11-16 17:11:02 +08:00
akleine
6d6dc1b8ed
fix: make PhotoMakerV2 more robust by image count check ( #970 )
master-364-6d6dc1b
2025-11-16 17:10:48 +08:00
Wagner Bruna
199e675cc7
feat: support for --tensor-type-rules on generation modes ( #932 )
master-363-199e675
2025-11-16 17:07:32 +08:00
leejet
742a7333c3
feat: add cpu rng ( #977 )
master-362-742a733
2025-11-16 14:48:15 +08:00
Wagner Bruna
e8eb3791c8
fix: typo in --lora-apply-mode help ( #972 )
master-361-e8eb379
2025-11-16 14:48:00 +08:00
Wagner Bruna
aa44e06890
fix: avoid crash with LoRAs and type override ( #974 )
master-360-aa44e06
2025-11-16 14:47:36 +08:00
Daniele
6448430dbb
feat: add break pseudo token support ( #422 )
...
---------
Co-authored-by: Urs Ganse <urs.ganse@helsinki.fi>
master-359-6448430
2025-11-16 14:45:20 +08:00
leejet
347710f68f
feat: support applying LoRA at runtime ( #969 )
master-358-347710f
2025-11-13 21:48:44 +08:00
lcy
59ebdf0bb5
chore: enable Windows ROCm(HIP) build release ( #956 )
* build: fix missing commit sha in macOS and Ubuntu build zip name
The build workflows for macOS and Ubuntu incorrectly check for the
"main" branch instead of "master" when retrieving the commit hash for
naming the build artifacts.
* build: correct Vulkan SDK installation condition in build workflow
* build: Enable Windows ROCm(HIP) build release
Following the build workflow of llama.cpp, add a Windows ROCm (HIP)
build release to the workflow.
Since there are many differences between the HIP build and other
builds, this commit adds a separate "windows-latest-cmake-hip" job
instead of enabling the ROCm matrix entry in the existing Windows
build job.
Main differences include:
- Install the ROCm SDK from the official AMD installer.
- Add a cache step for the ROCm installation and a ccache step for the
build, since the HIP build takes much longer than the other builds.
- Include the ROCm/HIP artifact in the release assets.
master-357-59ebdf0
2025-11-12 00:28:55 +08:00
Flavio Bizzarri
4ffcbcaed7
fix: specify enum modifier in sd_set_preview_callback signature ( #959 )
master-356-4ffcbca
2025-11-12 00:27:23 +08:00
leejet
694f0d9235
refactor: optimize the logic for name conversion and the processing of the LoRA model ( #955 )
master-355-694f0d9
2025-11-10 00:12:20 +08:00
stduhpf
8ecdf053ac
feat: add image preview support ( #522 )
master-354-8ecdf05
2025-11-10 00:12:02 +08:00
leejet
ee89afc878
fix: resolve issue with pmid ( #957 )
master-353-ee89afc
2025-11-09 22:47:53 +08:00
akleine
d2d3944f50
feat: add support for SD2.x with TINY U-Nets ( #939 )
master-352-d2d3944
2025-11-09 22:47:37 +08:00
akleine
0fa3e1a383
fix: prevent core dump in PM V2 in case of incomplete cmd line ( #950 )
master-351-0fa3e1a
2025-11-09 22:36:43 +08:00
leejet
c2d8ffc22c
fix: compatibility for models with modified tensor shapes ( #951 )
master-350-c2d8ffc
2025-11-07 23:04:41 +08:00
stduhpf
fb748bb8a4
fix: TAE encoding ( #935 )
master-349-fb748bb
2025-11-07 22:58:59 +08:00
leejet
8f6c5c217b
refactor: simplify the model loading logic ( #933 )
* remove String2GGMLType
* remove preprocess_tensor
* fix clip init
* simplify the logic for reading weights
master-348-8f6c5c2
2025-11-03 21:21:34 +08:00
leejet
6103d86e2c
refactor: introduce GGMLRunnerContext ( #928 )
* introduce GGMLRunnerContext
* add Flash Attention enable control through GGMLRunnerContext
* add conv2d_direct enable control through GGMLRunnerContext
master-347-6103d86
2025-11-02 02:11:04 +08:00
stduhpf
c42826b77c
fix: resolve multiple inpainting issues ( #926 )
* Fix inpainting masked image being broken by side effect
* Fix unet inpainting concat not being set
* Fix Flex.2 inpaint mode crash (+ use scale factor)
master-346-c42826b
2025-11-02 02:10:32 +08:00
Wagner Bruna
945d9a9ee3
docs: add Koboldcpp as an available UI ( #930 )
2025-11-02 02:03:01 +08:00
Wagner Bruna
353e708844
docs: update ggml and llama.cpp URLs ( #931 )
2025-11-02 02:02:44 +08:00
leejet
dd75fc081c
refactor: unify the naming style of ggml extension functions ( #921 )
master-343-dd75fc0
2025-10-28 23:26:48 +08:00
stduhpf
77eb95f8e4
docs: fix taesd direct download link ( #917 )
2025-10-28 23:26:23 +08:00
Wagner Bruna
8a45d0ff7f
chore: clean up stb includes ( #919 )
master-341-8a45d0f
2025-10-28 23:25:45 +08:00
leejet
9e28be6479
feat: add chroma radiance support ( #910 )
* add chroma radiance support
* fix ci
* simplify generate_init_latent
* workaround: avoid ggml cuda error
* format code
* add chroma radiance doc
master-340-9e28be6
2025-10-25 23:56:14 +08:00
akleine
062490aa7c
feat: add SSD1B and tiny-sd support ( #897 )
* feat: add code and doc for running SSD1B models
* Added some more lines to support SD1.x with TINY U-Nets too.
* support SSD-1B.safetensors
* fix sdv1.5 diffusers format loader
---------
Co-authored-by: leejet <leejet714@gmail.com>
master-339-062490a
2025-10-25 23:35:54 +08:00
stduhpf
faabc5ad3c
feat: allow models to run without all text encoder(s) ( #645 )
master-338-faabc5a
2025-10-25 22:00:56 +08:00
leejet
69b9511ce9
sync: update ggml
2025-10-24 00:32:45 +08:00
stduhpf
917f7bfe99
fix: support --flow-shift for flux models with default pred ( #913 )
master-336-917f7bf
2025-10-23 21:35:18 +08:00
leejet
48e0a28ddf
feat: add shift factor support ( #903 )
master-335-48e0a28
2025-10-23 01:20:29 +08:00
leejet
d05e46ca5e
chore: add .clang-tidy configuration and apply modernize checks ( #902 )
master-334-d05e46c
2025-10-18 23:23:40 +08:00
Wagner Bruna
64a7698347
chore: report number of Qwen layers as info ( #901 )
master-333-64a7698
2025-10-18 23:22:01 +08:00
leejet
0723ee51c9
refactor: optimize option printing ( #900 )
master-332-0723ee5
2025-10-18 17:50:30 +08:00