docs: update ggml and llama.cpp URLs (#931)
parent dd75fc081c · commit 353e708844
@@ -29,7 +29,7 @@ API and command-line option may change frequently.***
 
 ## Features
 
-- Plain C/C++ implementation based on [ggml](https://github.com/ggerganov/ggml), working in the same way as [llama.cpp](https://github.com/ggerganov/llama.cpp)
+- Plain C/C++ implementation based on [ggml](https://github.com/ggml-org/ggml), working in the same way as [llama.cpp](https://github.com/ggml-org/llama.cpp)
 - Super lightweight and without external dependencies
 - Supported models
     - Image Models
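The feature line touched above describes a llama.cpp-style workflow: build one self-contained CLI with CMake and point it at a model file. A minimal sketch of that workflow, not part of this diff — the model path and prompt are placeholders, and `-m`/`-p` are the flags used by the project's bundled `sd` example:

```shell
# Configure and build the bundled sd CLI (run from a stable-diffusion.cpp checkout)
cmake -B build
cmake --build build --config Release

# Generate an image from a text prompt; model path and prompt are placeholders
./build/bin/sd -m ./models/sd-v1-4.ckpt -p "a lovely cat"
```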
@@ -165,7 +165,7 @@ Thank you to all the people who have already contributed to stable-diffusion.cpp
 
 ## References
 
-- [ggml](https://github.com/ggerganov/ggml)
+- [ggml](https://github.com/ggml-org/ggml)
 - [diffusers](https://github.com/huggingface/diffusers)
 - [stable-diffusion](https://github.com/CompVis/stable-diffusion)
 - [sd3-ref](https://github.com/Stability-AI/sd3-ref)
@@ -157,7 +157,7 @@ ninja
 
 ## Build with SYCL
 
-Using SYCL makes the computation run on the Intel GPU. Please make sure you have installed the related driver and [Intel® oneAPI Base toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) before start. More details and steps can refer to [llama.cpp SYCL backend](https://github.com/ggerganov/llama.cpp/blob/master/docs/backend/SYCL.md#linux).
+Using SYCL makes the computation run on the Intel GPU. Please make sure you have installed the related driver and [Intel® oneAPI Base toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) before start. More details and steps can refer to [llama.cpp SYCL backend](https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/SYCL.md#linux).
 
 ```shell
 # Export relevant ENV variables
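For context, the shell block this hunk touches goes on to export the oneAPI environment before configuring the build. A minimal sketch of the full SYCL build, assuming the CMake option is `SD_SYCL` (named by analogy with llama.cpp's `GGML_SYCL`; verify against the surrounding build docs):

```shell
# Load the Intel oneAPI compilers and runtime into the current shell
source /opt/intel/oneapi/setvars.sh

# Configure with the SYCL backend and the oneAPI compilers;
# SD_SYCL is an assumption here -- check docs/build.md for the exact option name
cmake -B build -DSD_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release
```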