diff --git a/README.md b/README.md
index 615d892..837275a 100644
--- a/README.md
+++ b/README.md
@@ -29,7 +29,7 @@ API and command-line option may change frequently.***
 
 ## Features
 
-- Plain C/C++ implementation based on [ggml](https://github.com/ggerganov/ggml), working in the same way as [llama.cpp](https://github.com/ggerganov/llama.cpp)
+- Plain C/C++ implementation based on [ggml](https://github.com/ggml-org/ggml), working in the same way as [llama.cpp](https://github.com/ggml-org/llama.cpp)
 - Super lightweight and without external dependencies
 - Supported models
     - Image Models
@@ -165,7 +165,7 @@ Thank you to all the people who have already contributed to stable-diffusion.cpp
 
 ## References
 
-- [ggml](https://github.com/ggerganov/ggml)
+- [ggml](https://github.com/ggml-org/ggml)
 - [diffusers](https://github.com/huggingface/diffusers)
 - [stable-diffusion](https://github.com/CompVis/stable-diffusion)
 - [sd3-ref](https://github.com/Stability-AI/sd3-ref)
diff --git a/docs/build.md b/docs/build.md
index 02889ca..1ba582d 100644
--- a/docs/build.md
+++ b/docs/build.md
@@ -157,7 +157,7 @@ ninja
 
 ## Build with SYCL
 
-Using SYCL makes the computation run on the Intel GPU. Please make sure you have installed the related driver and [Intel® oneAPI Base toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) before start. More details and steps can refer to [llama.cpp SYCL backend](https://github.com/ggerganov/llama.cpp/blob/master/docs/backend/SYCL.md#linux).
+Using SYCL makes the computation run on the Intel GPU. Please make sure you have installed the related driver and [Intel® oneAPI Base toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) before start. More details and steps can refer to [llama.cpp SYCL backend](https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/SYCL.md#linux).
 
 ```shell
 # Export relevant ENV variables