Web Stable Diffusion

This project brings stable diffusion models to web browsers. Everything runs inside the browser with no server support. To our knowledge, this is the world’s first stable diffusion model running entirely in the browser. Please check out our GitHub repo to see how we did it. There is also a demo you can try out.

  • The WebGPU spec already includes FP16 support, but implementations do not support it yet. As a result, running the demo consumes about 7GB of memory. On an Apple silicon Mac with only 8GB of unified memory, it may take longer (a few minutes) to generate an image. The demo may also work on Macs with AMD GPUs.
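The ~7GB figure can be roughly sanity-checked from the model sizes. The sketch below uses approximate, commonly cited parameter counts for Stable Diffusion v1.x (assumptions, not measurements from this demo) to compare FP32 and FP16 weight footprints; the rest of the runtime memory goes to activations and intermediate buffers.

```python
# Rough weight-memory estimate for Stable Diffusion v1.x.
# Parameter counts are approximate public figures (assumptions):
UNET_PARAMS = 860e6          # U-Net denoiser
TEXT_ENCODER_PARAMS = 123e6  # CLIP text encoder
VAE_PARAMS = 84e6            # image autoencoder

total_params = UNET_PARAMS + TEXT_ENCODER_PARAMS + VAE_PARAMS

fp32_gib = total_params * 4 / 2**30  # 4 bytes per FP32 weight
fp16_gib = total_params * 2 / 2**30  # 2 bytes per FP16 weight

print(f"~{total_params / 1e9:.2f}B parameters")
print(f"FP32 weights: ~{fp32_gib:.1f} GiB")
print(f"FP16 weights: ~{fp16_gib:.1f} GiB")
```

Without FP16 in current WebGPU implementations, the weights alone take roughly 4 GiB in FP32, which together with activations is consistent with the ~7GB observed; FP16 would cut the weight footprint roughly in half.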

    Browser screenshot

    We have been seeing amazing progress through AI models recently. Thanks to the open-source effort, developers can now easily compose open-source models to accomplish amazing tasks. Stable diffusion enables the automatic creation of photorealistic images, as well as images in various styles, from text input. These models are large and compute-heavy, which means web applications built on them have to pipe all computation requests through (GPU) servers. Additionally, most of these workloads have to run on specific types of GPUs for which popular deep-learning frameworks are readily available.

    This project takes a step toward changing that status quo and bringing more diversity to the ecosystem. There are many reasons to move some (or all) of the computation to the client side: cost reduction for the service provider, as well as better personalization and privacy protection. Personal computers (and even mobile devices) are developing in a direction that makes this possible; the client side is getting quite powerful. For example, the latest MacBook Pro can have up to 96GB of unified memory to store the model weights and a reasonably powerful GPU to run many of the workloads.

    Wouldn’t it be fun to bring the ML models directly to the client, have the user open a browser tab, and instantly run stable diffusion in the browser? This project provides the first affirmative answer to this question.

    Text to Image Generation Demo

    WebGPU is not yet fully stable, and AI models of this scale have never run on top of it before, so we are testing the limits here. It may not work in your environment. So far, we have only tested it on Macs with M1/M2 GPUs in Chrome Canary (a nightly build of Chrome), because WebGPU is quite new. We have tested on Windows, where it does not work at the moment, likely due to driver issues. We anticipate support will broaden as WebGPU matures. Please check out the usage instructions and notes below.

    Instructions

    If you have a Mac with Apple silicon, here are the instructions to run stable diffusion locally in your browser:

    Notes

    Disclaimer

    This demo site is for research purposes only. Please conform to the usage terms of the stable diffusion models.