Mirror of https://github.com/leejet/stable-diffusion.cpp.git
fix: typo in --lora-apply-mode help (#972)
commit e8eb3791c8
parent aa44e06890
@@ -103,7 +103,7 @@ Options:
     contain any quantized parameters, the at_runtime mode will be used; otherwise,
     immediately will be used.The immediately mode may have precision and
     compatibility issues with quantized parameters, but it usually offers faster inference
-    speed and, in some cases, lower memory usageThe at_runtime mode, on the other
+    speed and, in some cases, lower memory usage. The at_runtime mode, on the other
     hand, is exactly the opposite.
   --scheduler    denoiser sigma scheduler, one of [discrete, karras, exponential, ays, gits, smoothstep, sgm_uniform, simple], default:
     discrete
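The option text above describes how the auto mode picks between the two concrete modes: quantized weights select at_runtime, anything else selects immediately. Below is a minimal C++ sketch of that selection rule, assuming hypothetical names (LoraApplyMode, resolve_lora_apply_mode, has_quantized_params); it is an illustration of the documented behavior, not the project's actual code.

enum class LoraApplyMode { AUTO, IMMEDIATELY, AT_RUNTIME };

// Sketch of the selection rule described in the help text; names are illustrative only.
LoraApplyMode resolve_lora_apply_mode(LoraApplyMode requested, bool has_quantized_params) {
    if (requested != LoraApplyMode::AUTO) {
        return requested;  // an explicit --lora-apply-mode value is used as given
    }
    // auto: quantized weights -> at_runtime (avoids the precision/compatibility issues);
    // otherwise -> immediately (typically faster, sometimes lower memory use)
    return has_quantized_params ? LoraApplyMode::AT_RUNTIME : LoraApplyMode::IMMEDIATELY;
}

Passing an explicit value such as --lora-apply-mode at_runtime bypasses this automatic choice entirely.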
@@ -1144,7 +1144,7 @@ void parse_args(int argc, const char** argv, SDParams& params) {
      "the way to apply LoRA, one of [auto, immediately, at_runtime], default is auto. "
      "In auto mode, if the model weights contain any quantized parameters, the at_runtime mode will be used; otherwise, immediately will be used."
      "The immediately mode may have precision and compatibility issues with quantized parameters, "
-     "but it usually offers faster inference speed and, in some cases, lower memory usage"
+     "but it usually offers faster inference speed and, in some cases, lower memory usage. "
      "The at_runtime mode, on the other hand, is exactly the opposite.",
      on_lora_apply_mode_arg},
     {"",
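The root cause of the typo is visible in the source hunk: the help message is built from adjacent C++ string literals, which the compiler concatenates into one string, so the literal ending in "usage" ran straight into the next one starting with "The", producing "usageThe" in the rendered help and in the README that mirrors it. A small standalone example of the effect (not project code):

#include <cstdio>

int main() {
    // Adjacent string literals are concatenated at compile time, so a missing
    // trailing ". " on one fragment runs two sentences together in the final text.
    const char* before = "lower memory usage"
                         "The at_runtime mode, on the other hand, is exactly the opposite.";
    const char* after  = "lower memory usage. "
                         "The at_runtime mode, on the other hand, is exactly the opposite.";
    std::printf("before: %s\n", before);  // ...usageThe at_runtime mode...
    std::printf("after:  %s\n", after);   // ...usage. The at_runtime mode...
    return 0;
}

The same pattern explains the "used.The" still visible in the unchanged context lines, where one literal ends with "used." and the next begins with "The" without a separating space.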