SDXL Benchmark

Let's dive into the details! Major Highlights: one of the standout additions in this update is the experimental support for Diffusers.

 

First, let's start with a simple art composition using default parameters to get our GPUs working. The settings that matter most are guidance, schedulers, and steps, along with run time and cost. To gauge the speed difference we are talking about: generating a single 1024x1024 image on an M1 Mac with SDXL (base) takes about a minute, and on slower setups it can take 6-12 minutes to render an image. For our tests, we'll use an RTX 4060 Ti 16 GB, an RTX 3080 10 GB, and an RTX 3060 12 GB graphics card; at higher (often sub-optimal) resolutions (1440p, 4K, etc.) the RTX 4090 will show increasing improvements compared to lesser cards.

SDXL 0.9 produces visuals that are more realistic than its predecessor, and SDXL is supposedly better at generating text, too, a task that has historically been difficult for these models. It still has limits: you cannot prompt for specific plants, or for the head and body in specific positions, and some early testers found every image bad in a different way. VRAM needs also differ sharply: SD 1.5 can use less than 2 GB for 512x512 images on the 'low' VRAM usage setting, while SDXL is most comfortable with up to 24 GB. Users on r/StableDiffusion have asked whether a 3060 12 GB can run SDXL and how fast and capable it is at different resolutions; many are playing with it to learn the differences in prompting and base capabilities.

In a notable speed comparison, SSD-1B achieves speeds up to 60% faster than the foundational SDXL model, a performance benchmark observed on A100 80GB and RTX 4090 GPUs. This architectural finesse and optimized training parameters position SSD-1B as a cutting-edge model in text-to-image generation, and it underlines the need for additional quantitative performance scores, specifically for text-to-image foundation models.
I figure from the related PR that you have to use --no-half-vae (it would be nice to mention this in the changelog!).

Benchmark Results: GTX 1650 is the Surprising Winner. As expected, our nodes with higher-end GPUs took less time per image, with the flagship RTX 4090 offering the best raw performance; the GTX 1650's win is on cost, not speed. To get the most out of SDXL 1.0, it's crucial to understand its optimal settings, starting with the guidance scale. SDXL is slower than SD 1.5 when generating at 512x512, but faster at 1024x1024, which is considered the base resolution for the model. For stylized subjects such as monsters, horns, claws, intimidating physiques, and angry faces are very common traits, but there's a lot of variation within them all.

SDXL-VAE-FP16-Fix was created by finetuning the SDXL-VAE to keep the final output the same, but make the internal activation values smaller, by scaling down weights and biases within the network. SD.Next supports two main backends, Original and Diffusers, which can be switched on the fly; Original is based on the LDM reference implementation and significantly expanded on by A1111. Dynamic engines generally offer slightly lower performance than static engines, but allow for much greater flexibility, since they can be configured for a range of height and width resolutions and batch sizes.

Stable Diffusion XL (SDXL) Benchmark. The Core ML Stable Diffusion repository comprises python_coreml_stable_diffusion, a Python package for converting PyTorch models to Core ML format and performing image generation with Hugging Face diffusers in Python. Stability AI has released the latest version of its text-to-image model, SDXL 1.0, and I have seen many comparisons of this new model.
For a while it deserved that reputation, but AUTO1111 severely regressed in performance in its recent releases; one common fix is to delete the venv folder and let it redownload everything the next time you run it. One way to make major improvements to hands would be to push tokenization (and prompt use) of specific hand poses, as those have more fixed morphology. Please share if you know authentic info; otherwise, share your empirical experience.

The images generated were of salads in the style of famous artists/painters, and you can also vote on which image is better. SDXL 1.0 can be deployed with a few clicks in SageMaker Studio. The weights of SDXL 0.9 are available and subject to a research license. This might seem like a dumb question, but I've started trying to run SDXL locally to see what my computer is able to achieve: if you have custom models, put them in a models/ directory; then all you need to do is download the two SDXL model files into your models folder.

The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.

Stable Diffusion XL (SDXL) Benchmark – 769 Images Per Dollar on Salad. Over the benchmark period, we generated more than 60k images, uploading more than 90 GB of content to our S3 bucket, incurring only $79 in charges from Salad, which is far less expensive than using an A10G on AWS, and orders of magnitude cheaper than fully managed services like the Stability API. Below we highlight two key factors: JAX just-in-time (jit) compilation and XLA compiler-driven parallelism with JAX pmap.
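The cost figures above are easy to sanity-check with a couple of lines of Python. This is illustrative arithmetic only: the write-up says "more than 60k images" for $79, so the fleet-wide number below is a lower bound, and it differs from the headline 769 images per dollar, which applies to the best-value configuration:

```python
def images_per_dollar(total_images: int, total_cost_usd: float) -> float:
    """Cost-efficiency metric used in the benchmark: images generated per dollar spent."""
    return total_images / total_cost_usd

# Fleet-wide figures from the write-up: >60k images for $79 on Salad.
overall = images_per_dollar(60_000, 79.0)
print(f"~{overall:.0f} images per dollar across the whole benchmark")
```

At roughly 759+ images per dollar across every GPU class in the fleet, the 769 figure for the most cost-effective card is consistent with the totals.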
🧨 Diffusers. This is a benchmark parser I wrote a few months ago to parse through the benchmarks and produce a whiskers-and-bar plot for the different GPUs, filtered by the different settings. (I was trying to find out which settings and packages were most impactful for GPU performance; that was when I found that running at half precision with xformers helped the most.)

The RTX 4090 runs at roughly 2.5 GHz with 24 GB of memory, a 384-bit memory bus, 128 3rd-gen RT cores, 512 4th-gen Tensor cores, DLSS 3, and a TDP of 450 W. The tests cover SDXL 1.0: the base SDXL model and refiner without any LoRA. On PC compatibility for SDXL 0.9: between the lack of artist tags and the poor NSFW performance, some users still prefer SD 1.5.

The Results: 769 hi-res images per dollar. SDXL can generate crisp 1024x1024 images with photorealistic details, and the high-end price/performance is actually good now. Unfortunately, it is not well optimized for the Automatic1111 WebUI; to put this into perspective, the SDXL model would require a comparatively sluggish 40 seconds to achieve the same task, which is devastating for performance. A 16 GB card will be faster than 12 GB of VRAM, and if you generate in batches, it'll be even better. But this bleeding-edge performance comes at a cost: SDXL requires a GPU with a minimum of 6 GB of VRAM.

The current benchmarks are based on the current version of SDXL 0.9. There are slight discrepancies between the output of SDXL-VAE-FP16-Fix and SDXL-VAE, but the decoded images should be close.

Running TensorFlow Stable Diffusion on Intel® Arc™ GPUs. SDXL GPU Benchmarks for GeForce Graphics Cards.
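The core of that kind of benchmark parser can be sketched in pure Python. Everything below is a hypothetical reconstruction, not the author's actual script: the it/s samples are made-up illustrative values, and grouping by (gpu, precision) is an assumed schema:

```python
from statistics import quantiles

# Hypothetical parsed benchmark rows: (gpu, precision, it/s). Illustrative values only.
rows = [
    ("RTX 4090", "fp16", 26.4), ("RTX 4090", "fp16", 27.1), ("RTX 4090", "fp16", 27.9),
    ("RTX 3060", "fp16", 6.8),  ("RTX 3060", "fp16", 7.0),  ("RTX 3060", "fp16", 7.2),
]

def five_number_summary(rows):
    """Group it/s samples by (gpu, precision) and compute the min, Q1, median,
    Q3, and max that a box-and-whiskers plot is drawn from."""
    groups = {}
    for gpu, precision, its in rows:
        groups.setdefault((gpu, precision), []).append(its)
    summary = {}
    for key, vals in groups.items():
        vals.sort()
        q1, med, q3 = quantiles(vals, n=4)
        summary[key] = (vals[0], q1, med, q3, vals[-1])
    return summary

stats = five_number_summary(rows)
```

Feeding the resulting five-number summaries into any plotting library reproduces the whiskers view described above, and filtering `rows` by precision or package first answers the "which setting matters" question.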
Automatically load specific settings that are best optimized for SDXL. Hands are just really weird, because they have no fixed morphology. A 4080 is a generational leap from a 3080/3090, but a 4090 is almost another generational leap, making the 4090 honestly the best option for most 3080/3090 owners; the 4060 is around 20% faster than the 3060 at a 10% lower MSRP and offers similar performance to the 3060 Ti. SDXL performance does seem sluggish to users coming from SD 1.5, and AdamW 8bit doesn't seem to work for training it. Aesthetic is very subjective, so some will prefer SD 1.5 results over SDXL.

Base workflow options: the inputs are only the prompt and negative words. A meticulous comparison of images generated by both versions highlights the distinctive edge of the latest model: SDXL ships a 6.6B-parameter refiner model, making it one of the largest open image generators today, and uses 2 separate CLIP models for prompt understanding, where SD 1.5 had just one. So it takes about 50 seconds per image on defaults for everything.

The beta version of Stability AI's latest model, SDXL, is now available for preview (Stable Diffusion XL Beta), along with code to get started with deploying to Apple Silicon devices. Originally posted to Hugging Face and shared here with permission from Stability AI. The community eagerly anticipates further details on the architecture, and community finetunes such as DreamShaper XL are already appearing.
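Seconds-per-image and iterations-per-second numbers get mixed together throughout these reports; converting between them is just step count divided by throughput. A small sketch (the step counts are example values, not recommended settings):

```python
def seconds_per_image(steps: int, iters_per_second: float) -> float:
    """Wall-clock estimate for one image: sampler steps divided by throughput.
    Ignores VAE decode, model loading, and other fixed overhead."""
    return steps / iters_per_second

# 50 steps at 1 it/s gives the 'about 50 seconds per image' experience;
# the same 50 steps at 7 it/s would take roughly 7 seconds.
slow = seconds_per_image(50, 1.0)
fast = seconds_per_image(50, 7.0)
```

This is why comparing raw it/s across models only makes sense at the same step count and resolution.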
The size went down to 4 GB, a 71% reduction, and in our opinion quality is still great; at 1024x1024 it consumes about the same at a batch size of 4. Stable Diffusion XL delivers more photorealistic results and a bit of text.

How To Do SDXL LoRA Training On RunPod With Kohya SS GUI Trainer & Use LoRAs With Automatic1111 UI (Sep 3, 2023 · Sep 29, 2023). On a 24 GB GPU, full training with the UNet and both text encoders is possible. Adding optimization launch parameters helps, too: --lowvram is an even more thorough optimization, splitting the UNet into many modules so that only one module is kept in VRAM. Compared to previous versions, SDXL is capable of generating higher-quality images.

You'll need to have a macOS computer with Apple silicon (M1/M2) hardware. On Windows, replacing the .dll files in stable-diffusion-webui\venv\Lib\site-packages\torch\lib with the ones from a cudnn-windows-x86_64-8 series download can help. Further optimizations, such as the introduction of 8-bit precision, are expected to further boost both speed and accessibility.

Segmind's Path to Unprecedented Performance: generate images at native 1024x1024 on SDXL. We cannot use any of the pre-existing benchmarking utilities to benchmark end-to-end Stable Diffusion performance, because the top-level StableDiffusionPipeline cannot be serialized into a single TorchScript object.
SDXL 1.0 is particularly well-tuned for vibrant and accurate colors, with better contrast, lighting, and shadows than its predecessor, all in native 1024×1024 resolution. Stability AI announced Stable Diffusion XL 1.0 (SDXL) and open-sourced it without requiring any special permissions to access it. Can consumer GPUs serve it at scale? The answer from our Stable Diffusion XL (SDXL) Benchmark: a resounding yes. The key to this success is the integration of NVIDIA TensorRT, a high-performance, state-of-the-art performance optimization framework.

Benchmarking: More than Just Numbers. Now, with the release of Stable Diffusion XL, we're fielding a lot of questions regarding the potential of consumer GPUs for serving SDXL inference at scale. The benchmark pipeline: generate the image at 1024x1024 with a 7.5 guidance scale and 50 inference steps, offload the base pipeline to CPU, load the refiner pipeline on the GPU, and refine the image at 1024x1024. Timing: 10 in series: ≈ 10 seconds. Some of these optimizations lower performance, but only by a bit, except if live previews are enabled.

SDXL can render some text, but it greatly depends on the length and complexity of the word. 8 GB of VRAM is too little for SDXL outside of ComfyUI, and when VRAM runs short the generation time increases by about a factor of 10.

Python Code Demo with Segmind SSD-1B: I ran several tests generating a 1024x1024 image. Best Settings for SDXL 1.0: Clip Skip results in a change to the Text Encoder. Found this Google Spreadsheet (not mine) with more data and a survey to fill in.

MASSIVE SDXL ARTIST COMPARISON: I tried out 208 different artist names with the same subject prompt for SDXL. Power limiting is also worth quantifying: a 20% power cut maps to a 3-4% performance cut, a 30% power cut to an 8-10% performance cut, and so forth. The chart above evaluates user preference for SDXL (with and without refinement) over Stable Diffusion 1.5.
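The power-limit trade-off quoted above implies a sizeable efficiency gain that is easy to quantify. The 3.5% figure in the sketch below is just the midpoint of the quoted 3-4% range, an assumption for illustration:

```python
def perf_per_watt_ratio(power_cut: float, perf_cut: float) -> float:
    """Performance-per-watt relative to stock settings:
    (remaining performance) / (remaining power)."""
    return (1.0 - perf_cut) / (1.0 - power_cut)

gain_20 = perf_per_watt_ratio(0.20, 0.035)  # 20% power cut, ~3.5% perf cut
gain_30 = perf_per_watt_ratio(0.30, 0.09)   # 30% power cut, ~9% perf cut
```

A 20% power cut therefore buys roughly 20% better performance per watt, which matters for anyone serving SDXL around the clock.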
The SDXL model will be made available through the new DreamStudio; details about the new model are not yet announced, but they are sharing a couple of the generations to showcase what it can do. Dynamic Engines can be configured for a range of height and width resolutions, and a range of batch sizes. Compared with SD 1.5, SDXL v0.9 had more training and larger data sets.

Over the past few weeks, the Diffusers team and the T2I-Adapter authors have been collaborating closely to add T2I-Adapter support for Stable Diffusion XL (SDXL) in the diffusers library. Before SDXL came out, I was generating 512x512 images on SD 1.5. For additional details on PEFT, please check this blog post or the diffusers LoRA documentation. With a 3.5 billion-parameter base model, SDXL can produce 1-megapixel images in different aspect ratios; the diffusers examples also include a train_text_to_image_sdxl.py script.

SDXL's performance is a testament to its capabilities and impact. The 4080 is about 70% as fast as the 4090 at 4K at 75% of the price. The enhancements added to SDXL translate into an improved performance relative to its predecessors, as shown in the following chart; for instance, consider the prompt "A wolf in Yosemite". Generate an image of default size, add a ControlNet and a LoRA, and AUTO1111 becomes 4x slower than ComfyUI with SDXL. Use the LCM LoRA with any SDXL diffusion model and the LCM scheduler and, bingo, you get high-quality inference in just a few steps. Würstchen V1, introduced previously, shares its foundation with SDXL as a Latent Diffusion model but incorporates a faster UNet architecture. In a notable speed comparison, SSD-1B achieves speeds up to 60% faster than the foundational SDXL model, a performance benchmark observed on A100 80GB and RTX 4090 GPUs.
The exact prompts are not critical to the speed, but note that they are within the token limit (75) so that additional token batches are not invoked. Models compared: SD 1.5 base, Juggernaut, SDXL. Stay tuned for more exciting tutorials! HPS v2: Benchmarking Text-to-Image Generative Models.

Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: among them, the UNet is 3x larger, and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters. Stable Diffusion requires a minimum of 8 GB of GPU VRAM (Video Random-Access Memory) to run smoothly; you can use it locally with less, but you have to set the image resolution output pretty small (400px x 400px) and use additional parameters to counter the low VRAM. The data show that the 4060 Ti 16 GB will be faster than a 4070 Ti when you generate a very big image. At 769 SDXL images per dollar, consumer GPUs on Salad's distributed cloud are the most cost-effective option among popular clouds.

The newly released Intel® Extension for TensorFlow plugin allows TF deep learning workloads to run on GPUs, including Intel® Arc™ discrete graphics. To load the model, select the downloaded .safetensors file from the Checkpoint dropdown. Stability AI released Stable Diffusion XL 1.0 (SDXL), its next-generation open-weights AI image synthesis model. One remaining weakness: SDXL cannot really do the wireframe views of 3D models that one would get in any 3D production software.
The new Cloud TPU v5e is purpose-built to bring the cost-efficiency and performance required for large-scale AI training and inference. The 40xx cards underperform at Stable Diffusion relative to their specs (benchmarks show this weird effect): even though they have roughly double the Tensor cores, the software support just isn't there yet, although the math-plus-acceleration argument still holds. Models covered include SDXL 0.9, DreamShaper XL, and Waifu Diffusion XL.

tl;dr: We use various formatting information from rich text, including font size, color, style, and footnotes, to increase control of text-to-image generation. Training T2I-Adapter-SDXL involved using 3 million high-resolution image-text pairs from LAION-Aesthetics V2, with training settings specifying 20000-35000 steps, a batch size of 128 (data parallel with a single-GPU batch size of 16), a constant learning rate of 1e-5, and mixed precision (fp16), at 1024 x 1024. The Stable Diffusion XL (SDXL) benchmark shows consumer GPUs can serve SDXL inference at scale.

The title is clickbait: early on the morning of July 27, Japan time, SDXL 1.0, the new version of Stable Diffusion, was officially released. This article explains (or doesn't) what SDXL is, what it can do, whether you should use it, and whether you can even run it, including the pre-release SDXL 0.9.

Also, memory requirements, especially for model training, are disastrous for owners of older cards with less VRAM (an issue that will fade as better cards resurface on the second-hand market). The Collective Reliability Factor: the chance of landing tails is 50% for 1 coin, 25% for 2 coins, 12.5% for 3 coins, and so on. Using my normal arguments (--xformers --opt-sdp-attention --enable-insecure-extension-access --disable-safe-unpickle), scroll down a bit for a benchmark graph with the text SDXL.
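The Collective Reliability Factor mentioned above is plain independent-probability arithmetic: each extra coin halves the chance that every coin lands tails. The mapping from coins to distributed nodes is the analogy being drawn, and the sketch below is just that arithmetic:

```python
def all_tails_probability(num_coins: int, p_tails: float = 0.5) -> float:
    """Probability that every one of `num_coins` independent fair coins lands tails."""
    return p_tails ** num_coins

probs = [all_tails_probability(n) for n in (1, 2, 3)]
# 1 coin: 50%, 2 coins: 25%, 3 coins: 12.5%
```

The same expression, with p as a single node's failure rate, gives the chance of a whole fleet failing at once, which is why a distributed pool of consumer GPUs can be collectively dependable.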
Stability AI has released its latest product, SDXL 1.0, an open model representing the next evolutionary step in text-to-image generation models; per-iteration time varies by a few seconds depending on the prompt. SDXL's performance has been compared with previous versions of Stable Diffusion, such as SD 1.5 and 2.1: the new version generates high-resolution graphics while using less processing power and requiring fewer text inputs, and it wins in all but two categories in the user preference comparison. The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 as well, and I find the results interesting. This is the default backend, and it is fully compatible with all existing functionality and extensions. The pipeline comes in two parts: SD-XL Base and SD-XL Refiner.

The RTX 4090 costs 33% more than the RTX 4080, but its overall specs far exceed that 33%; the bigger the images you generate, the worse that becomes. SDXL - The Best Open Source Image Model: the Stability AI team takes great pride in introducing SDXL 1.0 (10:13 PM · Jun 27, 2023). (This is running on Linux; if I use Windows and diffusers etc., then it's much slower, about 2m30 per image.) When finished, close down the CMD window and browser UI.

Conclusion. At 4K resolution, the RTX 4090 is 124% faster than the GTX 1080 Ti. Instead, Nvidia will leave it up to developers to natively support SLI inside their games for older cards, the RTX 3090, and "future SLI-capable GPUs," which more or less means the end of the road. There are also builds optimized for maximum performance to run SDXL with the Colab free tier. In this SDXL benchmark, we generated over 60k images.
Read More. There have been no hardware advancements in the past year that would render the performance hit irrelevant. Performance benchmarks have already shown that the NVIDIA TensorRT-optimized model outperforms the baseline (non-optimized) model on A10 and A100 GPUs, among others (reported by Dhanshree Shripad Shenwai). The time it takes to create an image depends on a few factors, so it's best to determine a benchmark so you can compare apples to apples.

SDXL GPU Benchmarks for GeForce Graphics Cards. Ever since SDXL came out and the first tutorials on how to train LoRAs appeared, I tried my luck at getting a likeness of myself out of it, with prompts like "close-up editorial photo of 20 yo woman, ginger hair, slim American…". To stay compatible with other implementations, we use the same clip-skip numbering, where 1 is the default behaviour and 2 skips 1 layer. In #22, SDXL is the only one with the sunken ship, etc.

Stable Diffusion XL (SDXL) was proposed in "SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis" by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. The drivers after that introduced the RAM + VRAM sharing tech.

SDXL Installation: double-click the .bat file, or make a shortcut and drag it to your desktop if you want to start it without opening folders. These settings balance speed and memory efficiency. Timing results: a single image takes < 1 second at an average speed of ≈27 it/s. Specs and numbers: an Nvidia RTX 2070 (8 GiB VRAM) renders SD 1.5 images in about 11 seconds each; I also tried the EMA version, which didn't change anything at all. The data also show that the 4060 Ti 16 GB will be faster than a 4070 Ti when you generate a very big image. SDXL 1.0 is supposed to be better (for most images, for most people, per A/B tests running on their Discord server).
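That clip-skip numbering reduces to a one-line mapping from the setting to the number of text-encoder layers actually used. The helper below is an illustration of the convention, not any particular UI's implementation, and the 12-layer default assumes a CLIP ViT-L style text encoder:

```python
def encoder_layers_used(clip_skip: int, total_layers: int = 12) -> int:
    """Map a clip-skip value to the number of text-encoder layers used:
    1 is the default (no layers skipped), 2 skips the final layer, and so on."""
    if not 1 <= clip_skip <= total_layers:
        raise ValueError("clip_skip must be between 1 and total_layers")
    return total_layers - (clip_skip - 1)
```

Under this convention, clip skip 2 takes the prompt embedding from the penultimate layer, which is why it visibly changes the text encoder's output.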
The key to this success is the integration of NVIDIA TensorRT, a high-performance, state-of-the-art performance optimization framework. ☁️ Five Benefits of a Distributed Cloud Powered by Gaming PCs. That said, I'm getting really low iterations per second on my RTX 4080 16GB; but yeah, it's not great compared to Nvidia's usual lead. To generate an image, use the base version in the 'Text to Image' tab and then refine it using the refiner version in the 'Image to Image' tab; please be sure to check out our blog post. For comparison, one model stays around 3 GB of VRAM at 1024x1024, while SDXL doesn't even go above 5 GB. The number of parameters on the SDXL base is far higher, yet I can't find an efficiency benchmark against previous SD models.

SD 1.5 options: inputs are the prompt, positive, and negative terms. Another low-effort comparison used a heavily finetuned model, probably with some post-processing, against a base model with a bad prompt; then again, the samples are generated at 512x512, not SDXL's minimum. After going back to SD 1.5 models, I remembered they, too, were more flexible than mere LoRAs. An example prompt from r/StableDiffusion: "1990s vintage colored photo, analog photo, film grain, vibrant colors, canon ae-1, masterpiece, best quality, realistic, photorealistic, (fantasy giant cat sculpture made of yarn:1…". I just built a 2080 Ti machine for SD, right as the SDXL 1.0 Launch Event ended just NOW.

SDXL Benchmark with 1, 2, and 4 batch sizes (it/s), against SD 1.5. One tweak: open the relevant .py file and, beneath the list of lines beginning in "import" or "from", add the two torch configuration lines. SDXL 1.0 introduces denoising_start and denoising_end options, giving you more control over the denoising process.
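The denoising_start/denoising_end options mentioned above split one denoising schedule between the base and refiner by a fraction, and the step arithmetic is simple. In the sketch below, the 0.8 handoff and 50-step schedule are example values, not recommended settings:

```python
def split_denoising_steps(num_steps: int, handoff: float) -> tuple:
    """Split a schedule at `handoff` (the base pipeline's denoising_end, equal to
    the refiner's denoising_start): the base handles the high-noise fraction
    and the refiner finishes the remaining low-noise steps."""
    if not 0.0 < handoff < 1.0:
        raise ValueError("handoff must be a fraction between 0 and 1")
    base_steps = round(num_steps * handoff)
    return base_steps, num_steps - base_steps

base_steps, refiner_steps = split_denoising_steps(50, 0.8)  # 40 base, 10 refiner
```

Because both pipelines share one schedule, the total step count stays fixed while the handoff point shifts work between the two models.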
Moving on to 3D rendering, Blender is a popular open-source rendering application, and we're using the latest Blender Benchmark, which uses Blender 3.x. I'm able to generate at 640x768 and then upscale 2-3x on a GTX 970 with 4 GB of VRAM. Details: A1111 can use Intel OpenVINO to accelerate generation speed (3 sec for 1 image), but it needs time for preparation and warming up. We've tested it against various other models, and the results are summarized here.

stability-ai/sdxl: a text-to-image generative AI model that creates beautiful images. This benchmark was conducted by Apple and Hugging Face using public beta versions of iOS 17, along with code to get started with deploying to Apple Silicon devices. In a groundbreaking advancement, we have unveiled our latest model. The SDXL InstructPix2Pix training script implements the InstructPix2Pix training procedure while being faithful to the original implementation; we have only tested it at a small scale.

Example prompt (SDXL): "Stunning sunset over a futuristic city, with towering skyscrapers and flying vehicles, golden hour lighting and dramatic clouds, high detail, moody atmosphere." Serving SDXL with JAX on Cloud TPU v5e with high performance and cost-efficiency is possible thanks to the combination of purpose-built TPU hardware and a software stack optimized for performance. The SDXL model represents a significant improvement in the realm of AI-generated images, with its ability to produce more detailed, photorealistic images, excelling even in challenging areas. This will increase speed and lessen VRAM usage at almost no quality loss. The architecture of SDXL 1.0 is still in development. In this Stable Diffusion XL (SDXL) benchmark, consumer GPUs (on SaladCloud) delivered 769 images per dollar, the highest among popular clouds.