287 Commits

Author SHA1 Message Date
Richard Palethorpe
a6a8569ea0
feat: Add SYCL Dockerfile (#651) 2025-09-14 13:02:59 +08:00
Erik Scholz
9e7befa320
fix: harden for large files (#643) master-9e7befa 2025-09-14 12:44:19 +08:00
Wagner Bruna
c607fc3ed4
feat: use Euler sampling by default for SD3 and Flux (#753)
Thank you for your contribution.
master-c607fc3
2025-09-14 12:34:41 +08:00
Wagner Bruna
b54bec3f18
fix: do not force VAE type to f32 on SDXL (#716)
This seems to be a leftover from the initial SDXL support: it's
not enough to avoid NaN issues, and it's not needed for the
fixed sdxl-vae-fp16-fix.
master-b54bec3
2025-09-14 12:19:59 +08:00
Wagner Bruna
5869987fe4
fix: make weight override more robust against ggml changes (#760) master-5869987 2025-09-14 12:15:53 +08:00
Wagner Bruna
48956ffb87
feat: reduce CLIP memory usage with no embeddings (#768) master-48956ff 2025-09-14 12:08:00 +08:00
Wagner Bruna
ddc4a18b92
fix: make tiled VAE reuse the compute buffer (#821) master-ddc4a18 2025-09-14 11:41:50 +08:00
leejet
fce6afcc6a
feat: add sd3 flash attn support (#815) master-fce6afc 2025-09-11 23:24:29 +08:00
Erik Scholz
49d6570c43
feat: add SmoothStep Scheduler (#813) master-49d6570 2025-09-11 23:17:46 +08:00
clibdev
6bbaf161ad
chore: add install() support in CMakeLists.txt (#540) master-6bbaf16 2025-09-11 22:24:16 +08:00
clibdev
87cdbd5978
feat: use log_printf to print ggml logs (#545) master-87cdbd5 2025-09-11 22:16:05 +08:00
leejet
b017918106
chore: remove sd3 flash attention warn (#812) master-b017918 2025-09-10 22:21:02 +08:00
Wagner Bruna
ac5a215998
fix: use {} for params init instead of memset (#781) master-ac5a215 2025-09-10 21:49:29 +08:00
Wagner Bruna
abb36d66b5
chore: update flash attention warnings (#805) master-abb36d6 2025-09-10 21:38:21 +08:00
Wagner Bruna
ff4fdbb88d
fix: accept NULL in sd_img_gen_params_t::input_id_images_path (#809) master-ff4fdbb 2025-09-10 21:22:55 +08:00
Markus Hartung
abb115cd02
fix: clarify lora quant support and small fixes (#792) master-abb115c 2025-09-08 22:39:25 +08:00
leejet
c648001030
feat: add detailed tensor loading time stat (#793) master-c648001 2025-09-07 22:51:44 +08:00
stduhpf
c587a43c99
feat: support incrementing ref image index (omni-kontext) (#755)
* kontext: support ref image indices

* lora: support x_embedder

* update help message

* Support for negative indices

* support for OmniControl (offsets at index 0)

* c++11 compat

* add --increase-ref-index option

* simplify the logic and fix some issues

* update README.md

* remove unused variable

---------

Co-authored-by: leejet <leejet714@gmail.com>
master-c587a43
2025-09-07 22:35:16 +08:00
leejet
f8fe4e7db9
fix: add flash attn support check (#803) master-f8fe4e7 2025-09-07 21:29:06 +08:00
leejet
1c07fb6fb1 docs: update docs/wan.md 2025-09-07 12:07:20 +08:00
leejet
675208dcb6 chore: update to c++17 master-675208d 2025-09-07 12:04:17 +08:00
leejet
d7f430cd69 docs: update docs and help message master-d7f430c 2025-09-07 02:26:44 +08:00
stduhpf
141a4b4113
feat: add flow shift parameter (for SD3 and Wan) (#780)
* Add flow shift parameter (for SD3 and Wan)

* unify code style and fix some issues

---------

Co-authored-by: leejet <leejet714@gmail.com>
master-141a4b4
2025-09-07 02:16:59 +08:00
stduhpf
21ce9fe2cf
feat: add support for timestep boundary based automatic expert routing in Wan MoE (#779)
* Wan MoE: Automatic expert routing based on timestep boundary

* unify code style and fix some issues

---------

Co-authored-by: leejet <leejet714@gmail.com>
master-21ce9fe
2025-09-07 01:44:10 +08:00
leejet
cb1d975e96
feat: add wan2.1/2.2 support (#778)
* add wan vae support

* add wan model support

* add umt5 support

* add wan2.1 t2i support

* make flash attn work with wan

* make wan a little faster

* add wan2.1 t2v support

* add wan gguf support

* add offload params to cpu support

* add wan2.1 i2v support

* crop image before resize

* set default fps to 16

* add diff lora support

* fix wan2.1 i2v

* introduce sd_sample_params_t

* add wan2.2 t2v support

* add wan2.2 14B i2v support

* add wan2.2 ti2v support

* add high noise lora support

* sync: update ggml submodule url

* avoid build failure on linux

* avoid build failure

* update ggml

* update ggml

* fix sd_version_is_wan

* update ggml, fix cpu im2col_3d

* fix ggml_nn_attention_ext mask

* add cache support to ggml runner

* fix the issue of illegal memory access

* unify image loading processing

* add wan2.1/2.2 FLF2V support

* fix end_image mask

* update to latest ggml

* add GGUFReader

* update docs
master-cb1d975
2025-09-06 18:08:03 +08:00
Wagner Bruna
2eb3845df5
fix: typo in the verbose long flag (#783) master-2eb3845 2025-09-04 00:49:01 +08:00
stduhpf
4c6475f917
feat: show usage on unknown arg (#767) master-4c6475f 2025-09-01 21:38:34 +08:00
SmallAndSoft
f0fa7ddc40
docs: add compile option needed by Ninja (#770) 2025-09-01 21:35:25 +08:00
SmallAndSoft
a7c7905c6d
docs: add missing dash to docs/chroma.md (#771) 2025-09-01 21:34:34 +08:00
Wagner Bruna
eea77cbad9
feat: throttle model loading progress updates (#782)
Some terminals have slow display latency, so frequent output
during model loading can actually slow down the process.

Also, since tensor loading times can vary a lot, the progress
display now shows the average across past iterations instead
of just the last one.
master-eea77cb
2025-09-01 21:32:01 +08:00
NekopenDev
0e86d90ee4
chore: add Nvidia 30 series (cuda arch 86) to build master-0e86d90 2025-09-01 21:21:34 +08:00
leejet
5900ef6605 sync: update ggml, make cuda im2col a little faster 2025-08-03 01:29:40 +08:00
Daniele
5b8996f74a
Conv2D direct support (#744)
* Conv2DDirect for VAE stage

* Enable only for Vulkan, reduced duplicated code

* Cmake option to use conv2d direct

* conv2d direct always on for opencl

* conv direct as a flag

* fix merge typo

* Align conv2d behavior to flash attention's

* fix readme

* add conv2d direct for controlnet

* add conv2d direct for esrgan

* clean code, use enable_conv2d_direct/get_all_blocks

* format code

---------

Co-authored-by: leejet <leejet714@gmail.com>
master-5b8996f
2025-08-03 01:25:17 +08:00
Wagner Bruna
f7f05fb185
chore: avoid setting GGML_MAX_NAME when building against external ggml (#751)
An external ggml will most likely have been built with the default
GGML_MAX_NAME value (64), which would be inconsistent with the value
set by our build (128). That would be an ODR violation, and it could
easily cause memory corruption issues due to the different
sizeof(struct ggml_tensor) values.

For now, when linking against an external ggml, we demand it has been
patched with a bigger GGML_MAX_NAME, since we can't check against a
value defined only at build time.
master-f7f05fb
2025-08-03 01:24:40 +08:00
Seas0
6167e2927a
feat: support build against system installed GGML library (#749) master-6167e29 2025-08-02 11:03:18 +08:00
leejet
f6b9aa1a43 refactor: optimize the usage of tensor_types master-f6b9aa1 2025-07-28 23:18:29 +08:00
Wagner Bruna
7eb30d00e5
feat: add missing models and parameters to image metadata (#743)
* feat: add new scheduler types, clip skip and vae to image embedded params

- If a non default scheduler is set, include it in the 'Sampler' tag in the data
embedded into the final image.
- If a custom VAE path is set, include the vae name (without path and extension)
in embedded image params under a `VAE:` tag.
- If a custom Clip skip is set, include that Clip skip value in embedded image
params under a `Clip skip:` tag.

* feat: add separate diffusion and text models to metadata

---------

Co-authored-by: one-lithe-rune <skapusniak@lithe-runes.com>
master-7eb30d0
2025-07-28 22:00:27 +08:00
stduhpf
59080d3ce1
feat: change image dimensions requirement for DiT models (#742) master-59080d3 2025-07-28 21:58:17 +08:00
R0CKSTAR
8c3c788f31
feat: upgrade musa sdk to rc4.2.0 (#732) 2025-07-28 21:51:11 +08:00
leejet
f54524f620 sync: update ggml 2025-07-28 21:50:12 +08:00
leejet
eed97a5e1d sync: update ggml master-eed97a5 2025-07-24 23:04:08 +08:00
Ettore Di Giacinto
fb86bf4cb0
docs: add LocalAI to README's UIs (#741) 2025-07-24 22:39:26 +08:00
leejet
bd1eaef93e fix: convert f64 to f32 and i64 to i32 when loading weights master-bd1eaef 2025-07-24 00:59:38 +08:00
Erik Scholz
ab835f7d39
fix: correct head dim check and L_k padding of flash attention (#736) master-ab835f7 2025-07-24 00:57:45 +08:00
Daniele
26f3f61d37
docs: add sd.cpp-webui as an available frontend (#738) 2025-07-23 23:51:57 +08:00
Oleg Skutte
1896b28ef2
fix: make --taesd work (#731) master-1896b28 2025-07-15 00:45:22 +08:00
leejet
0739361bfe fix: avoid macOS build failed master-0739361 2025-07-13 20:18:10 +08:00
leejet
ca0bd9396e
refactor: update c api (#728) 2025-07-13 18:48:42 +08:00
stduhpf
a772dca27a
feat: add Instruct-Pix2pix/CosXL-Edit support (#679)
* Instruct-p2p support

* support 2 conditionings cfg

* Do not re-encode the exact same image twice

* fixes for 2-cfg

* Fix pix2pix latent inputs + improve inpainting a bit + fix naming

* prepare for other pix2pix-like models

* Support sdxl ip2p

* fix reference image embeddings

* Support 2-cond cfg properly in cli

* fix typo in help

* Support masks for ip2p models

* unify code style

* delete unused code

* use edit mode

* add img_cond

* format code

---------

Co-authored-by: leejet <leejet714@gmail.com>
master-a772dca
2025-07-12 15:36:45 +08:00
Wagner Bruna
6d84a30c66
feat: overriding quant types for specific tensors on model conversion (#724) master-6d84a30 2025-07-08 00:11:38 +08:00