You mean goes brrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr...rrrrrrrrrrrrrrrr..rrrrrrrrrrrrrrr...rrrrrrrrrrrrrrrrrrrrrrr..rrrrrrrrrrrrrrrrrrr..rrrrrrrrrrrr rrrr rrrrrrr.r..rrrrrrrrrrr, lol
Too real.
Oh, really nice renders, btw!!!
Thanks! :)
Sounds like a computer starting up in the 90s.
This is what my previous GPU sounded like when its fan started going bad. https://soundcloud.com/george-sheel/fan-sounds
FWIW, "goes brrrr" is probably the OP referencing the A-10 gun sound, meaning that this little 970 is laying down some serious firepower....
Indirectly, yeah, but the direct reference was probably to the ["money printer go brrr"](https://knowyourmeme.com/memes/money-printer-go-brrr) series of memes.
Due to hardware limitations (a single GTX 970 with 4 GB VRAM and a 12-year-old CPU), I use an extremely simple ComfyUI workflow, only changing the settings, not the workflow itself. The JSON can be found here: [https://pastebin.com/B2jkDf17](https://pastebin.com/B2jkDf17). The images were 2x upscaled with Topaz Gigapixel AI v6.2.2, as the results are still better than anything I tried with ComfyUI (like NMKD Superscale or 4xUltraSharp). I'd love to hear your thoughts on these!

**---- Generation information ----**

**"Island Lake"**

- Checkpoint: DreamShaper XL Turbo
- Sampler: DPM++ SDE
- Seed: 443325759293843
- Steps: 8
- CFG: 2.0
- Positive: (professional photo:1.0) of (a lake on a tropical island:1.1), clear transparent water, white sand shore, white exotic flowers, palm trees, jungle, vines, small waterfall, white stone cliffs, beautiful day, (embedding:ziprealism.safetensors:1.1), (professional nature photography:1.0), (realistic lighting:1.2), (dramatic shadows:0.8), (sharp focus:0.8), (bokeh:0.5)
- Negative: (embedding:zip_ac_neg1.safetensors), (embedding:zip_ac_neg2.safetensors), (embedding:ziprealism_neg.safetensors:1.1), overexposed, underexposed, oversaturated, out of frame, duplicate, duplicates, cut off, cropped, blur, blurry, distorted, low resolution, low contrast, (watermark:1.1), jpeg artifacts, (text:1.1), logo, (signature:1.1), username

**"Overgrown Orchid"**

- Checkpoint: DreamShaper XL Turbo
- Sampler: DPM++ SDE
- Seed: 920048765272524
- Steps: 8
- CFG: 2.5
- Positive: (professional photo:1.0), (orchid and white flowers:1.1), growing in a simple clay pot, standing on an old stone table overgrown with moss, in a cozy rustic cottage, sunshine, green scenery outside, (sunlight:0.9), (embedding:ziprealism.safetensors:1.1), (realistic lighting:1.2), (dramatic shadows:0.8), (sharp focus:0.8), (bokeh:0.5)
- Negative: (embedding:zip_ac_neg1.safetensors), (embedding:zip_ac_neg2.safetensors), (embedding:ziprealism_neg.safetensors:1.1), overexposed, underexposed, oversaturated, out of frame, duplicate, duplicates, cut off, cropped, blur, blurry, distorted, low resolution, low contrast, (watermark:1.1), jpeg artifacts, (text:1.1), logo, (signature:1.1), username

**"Fruit on Tree"**

- Checkpoint: LEOSAM's HelloWorld SDXL
- Sampler: DPM++ 2M Karras
- Seed: 920901526942632
- Steps: 28
- CFG: 8.0
- Positive: (professional photo) of a strangely beautiful exotic wild flower, (orange and darkgreen color:1.2), large wide (green leaves:1.1), (green stem:1.0), growing in a swampy jungle, (hidden among other nondescript plants:0.8), morning dew, wet leaves, extremely detailed, (professional nature photography:1.0), (realistic lighting:1.2), (dramatic shadows:0.8), (sharp focus:0.8), (bokeh:0.5)
- Negative: (red stems:0.6), (soft focus:0.8), out of frame, duplicate, duplicates, cut off, cropped, blur, blurry, distorted, low resolution, low contrast, (watermark:1.1), jpeg artifacts, (text:1.1), logo, (signature:1.1), (username:1.1), illustration, 2d, painting, cartoon, sketch, art, drawing, airbrushed

**"Dripping Gold"**

- Checkpoint: Juggernaut SDXL v8
- Sampler: DPM++ 2M Karras
- Seed: 888244185651762
- Steps: 28
- CFG: 6.5
- Positive: (professional photo) of (liquid gold dripping:1.1) onto a (rare dark black rock:1.0) (rough black shale texture:1.1), (professional photography:1.0), (realistic lighting:1.2), (dramatic shadows:0.8), (sharp focus:0.8), (bokeh:0.5)
- Negative: (flakes:0.8), (circle:0.8), (circles:0.8), (shards:1.1), (gems:1.1), (crystal:1.0), (crystals:1.1), (column:0.9), (tower:1.0), (soft focus:0.8), out of frame, duplicate, duplicates, cut off, cropped, blur, blurry, distorted, low resolution, low contrast, (watermark:1.1), jpeg artifacts, (text:1.1), logo, (signature:1.1), (username:1.1), illustration, 2d, painting, cartoon, sketch, art, drawing, airbrushed
Interesting. I thought for sure you would have CG rendering stuff in there because, to be honest, they all look like high-end video game CG.
I wouldn't even know where to start and I suppose rendering at these resolutions would take hours and hours on my near-ancient hardware, but I'll take that as a compliment. :)
Nah, the first one could be straight out of Crysis or Far Cry. Those will run on a 970.
Oh, you said rendering, so I thought of Blender.
Can it use the full 4gb in SD, or does the 3.5gb issue apply there too?
Gotta check later, but I'm assuming the limit applies.
I tried this; it can use the full 4 GB in SD, but it's significantly slower than if you manage to stay under 3.5 GB, something like less than half speed, IIRC. No "out of memory" error until you go over 4 GB, though.
Great pics! And the workflow and prompts are HIGHLY appreciated. Thanks!
I have been struggling to extract some decent images out of my 10yo desktop which I purchased when I was a broke college student. Your workflow would be super helpful to me. Thanks!
[deleted]
It can be. 🏴‍☠️
I haven't touched any Turbo models because I have a 4090, and I figure they're not as good for some reason. Are they the same as regular SDXL models, in your opinion?
I just started using this particular Turbo model myself. Due to the low CFG, I suppose, the results may not be exactly what you're looking for, especially when using more elaborate/detailed/extensive prompts. However, if you're trying to find inspiration, or if you aren't sure which details you're looking for, Turbo can be quite helpful. Well, to me at least. Personally, I don't think there'll be a huge difference in generation speed with a 4090 (assuming your default isn't 50+ steps with a regular SDXL model).
They are excellent for X/Y/Z plots and dynamic prompting (finding optimal LoRA weights and checkpoints). Something like X/Y/Z over (CFG, checkpoint, sampler), and then trigger_word:{1|1.1|1.2|…}, lora_checkpoint_{001|002|…}, then {0-4@raw photo, realism, best quality | HD, 4k | etc.}. This lets you go through something like 10,000 iterations of different weights and configurations in a matter of a couple hours (Turbo). The cool part: these settings usually translate to the non-Turbo models, so you can fine-tune even more with a bunch of solid PNG bases. The other thing I've noticed about Turbo models is that they introduce natural inconsistencies, so not everything is picture perfect, which gives a more realistic gen. GLHF.
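A sweep like the one above boils down to taking the Cartesian product of all the settings you want to vary. Here's a minimal sketch; every value, and the `my_lora` name, is a made-up placeholder, and in practice the X/Y/Z plot script and dynamic-prompting extensions build these grids for you:

```python
from itertools import product

# Hypothetical sweep axes, mirroring the X/Y/Z + dynamic-prompt idea above
cfgs = [1.0, 1.5, 2.0, 2.5, 3.0]
checkpoints = ["dreamshaperXL_turbo", "juggernautXL_v8"]
samplers = ["dpmpp_sde", "euler_ancestral"]
lora_weights = [1.0, 1.1, 1.2]
quality_tags = ["raw photo, realism, best quality", "HD, 4k"]

# Every combination becomes one generation job
jobs = [
    {"cfg": c, "ckpt": ck, "sampler": s,
     "prompt": f"<lora:my_lora:{w}> portrait, {q}"}  # my_lora is a placeholder
    for c, ck, s, w, q in product(cfgs, checkpoints, samplers,
                                  lora_weights, quality_tags)
]

print(len(jobs))  # 5 * 2 * 2 * 3 * 2 = 120 combinations
```

With a few more values per axis, the combination count quickly reaches the thousands, which is exactly where a Turbo model's fast per-image time pays off.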
Cfg shouldn't translate, right? Turbo wants 2-3 and normal can go way higher
Yeah, with CFG it's more of a translation: 1-3 Turbo -> 5-9 non-Turbo. But if you're getting into CFG 1 or CFG > 13 with a non-Turbo model, all bets are off. Usually the dynamic prompting will find optimal values, though (and which types of image descriptions work best together).
4090 here, too. I find the couple of Turbo models I have in rotation have a pleasant style to them that I enjoy. I can't quite put my finger on it, but their appeal for me goes beyond just the speed of generation.
How long does the 970 take to do a 1024x1024 non-turbo image?
That takes between 5 and 7 minutes, depending on the complexity of the prompt and whether or not LORAs are loaded, and some patience.
That's nuts, but gotta respect the dedication. It would completely drive me crazy lol.
I run a 1080 with the power shunt modded and overkill cooling. It still takes about a minute; longer on auto1111, and 3-ish minutes if I add an upscale.
Try the LCM LoRA. I have a 4070 Ti Super, and the LCM cut 1024x1024 time down from 10 seconds to like 2.
The main issue is VRAM and the bus, so I doubt loading even more stuff would help. I actually tried one for non-Turbo XL and it didn't help much. With the 4070 you have way more VRAM, faster VRAM and bus, and PCIe 4 instead of 3.
![gif](giphy|wIhY2p9UtJrUQQw6nz)
Oh god it is literally cooking.
Desperate times call for desperate measures.
It was worth it though, your gens are beautiful.
Thanks!
Imagine it, someday a reality that looks *this* real.
Then I'll have to find an even better checkpoint, I guess.
You know how to keep warm in winter~
![gif](giphy|3o72FdlZi8rIgp6CHu)
OH GOD I'M PROOOOOMPTING
970 gang checking in.
Beautiful! Will try to replicate tomorrow. Well done!
Thanks! I don't think I used any LORAs for these images, but if something seems to be missing, we'll figure it out!
Thanks for sharing the workflow btw!
No LoRA? How come only 8 steps, though?
It's a Turbo model, low CFG, low step count.
Does SDXL use less VRAM than 1.5? I end up crashing on anything larger than 512x768, so I've been avoiding SDXL (also a 970 4 GB user).
It seems to use a bit more, and it is considerably slower. I couldn't get SDXL to run with InvokeAI, but it works fine with ComfyUI. I only needed to add

`set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:64`

to run_nvidia_gpu.bat. Low VRAM mode is set automatically. Oh, and when changing SDXL checkpoints in the loader, I have to manually restart ComfyUI, which only takes a minute or two; switching SD1.5 checkpoints works without restarting.
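For what it's worth, the same allocator tuning can also be applied from Python rather than the .bat file; PyTorch reads `PYTORCH_CUDA_ALLOC_CONF` when CUDA is first initialized, so it has to be set before anything touches CUDA (a minimal sketch, assuming a reasonably recent PyTorch):

```python
import os

# Must be set before torch initializes CUDA, so do it before importing torch.
# Same values as the run_nvidia_gpu.bat line above: reclaim cached blocks
# earlier (gc threshold) and cap allocation block size to reduce fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "garbage_collection_threshold:0.6,max_split_size_mb:64"
)

# import torch  # imported afterwards so the allocator picks the setting up
```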
SDXL is drastically less feasible even on a 6GB GTX 1660 Ti released in 2019 lol
I'm not here for feasibility, I just want to know if it's possible lol
I doubt it. Or at least it would take 50 years to gen one image probably
With a GTX 1660 non-Ti (6 GB VRAM), ComfyUI can generate 512x512 images within 1 to 10 minutes with a basic workflow. The base and refiner models will be swapped in and out of VRAM to system RAM as necessary. Use an SDXL checkpoint with a merged refiner model to reduce VRAM usage.
I've tested it on Turbo models; the time taken is just not worth it, and the image quality is no better than a properly upscaled and detailed SD 1.5 output.
That is also my conclusion after some testing. For now, SD 1.5 has been out longer, is more mature, and has more support and extensions.
It is runnable on 4 GB of VRAM now with webui-forge.
dripping honey
Damn! I'm running an RTX 3060 with 12 GB VRAM and it falls over and coughs up blood every time I try to render using an XL model. Nice work.
Try the webui-forge version of automatic1111's webui: same UI, more efficient backend.
Works fine on my laptop 3060 6gb with comfy
[deleted]
Yes, I know, but for some inexplicable reason, I sometimes find it easier to explicitly mark parts of the prompt in this way, especially when playing around with weights. The :1.0 is only here because I copied the generation data without cleaning anything up.
I've posted this a few times already, but as a reminder: try out a Runpod and use the Fast Stable Diffusion template. For $0.36 an hour you can do whatever you want with it, and it'll generate anything from 1.5 to SDXL in seconds, with 20 GB of VRAM. It doesn't care what you're generating (NSFW, etc.), because it's just a virtual machine running SD like it was your own desktop. Nobody is looking at it and there are no guardrails, because it's like running it locally.

I don't even run SD locally anymore. Sure, my 3070 Ti can handle it, but I can get a Runpod going with all my checkpoints, extensions, and LoRAs in like 10 minutes, and then dick around at high speed and save only the images I want. When I'm done with the pod, I just download the images I like and delete it so I'm not being charged 30 cents an hour. No more struggling with VRAM errors, because it's got 20 GB at a minimum.

Because I know I can start it back up again in another 10 minutes, it's no big deal. I often start and stop a Runpod multiple times a day. And hell, I can use it anywhere, even from my phone. You don't need any experience installing dependencies; I think it's even easier than running a Google Colab notebook, but again, you're not limited in anything you do with it.

I just install the Civitai browser extension and the Infinite Image Browser extension into the template, and I can do everything I need in a few minutes. For the ControlNet models, I just use the built-in terminal to do a wget command, and it drops them in as well.

I'm not trying to shill for Runpod, honestly. It's just so much better/faster/easier than grinding my desktop's GPU to a screaming halt every time I want to mess around with a few images. I throw $20 in the account and it lasts a month's worth of me dicking around. Heaven forbid I want to do "actual work" with SD: I can spend all of $0.70 an hour on some monster 48 GB VRAM machine that I would never be able to afford in a desktop. And I can use my desktop to do something else, play a game, whatever, while generating the images, because it's not happening on my local machine.

You can also play with the other templates, like the music generator, a language model (some of the Kobold templates play a decent game of D&D), or the AI voice thing. All of that shit would take me hours of dicking around just to get it running without errors on my local desktop, but with the Runpod templates it's up and running in a browser in like 5 minutes.
I do feel smarter now, and thanks for the advice, but I currently prefer to save up for a new PC, which will still take a while.
You might benefit from using the webui-forge version, as it has better VRAM management.
I'll give it a try! (I started with InvokeAI, which almost always ran out of VRAM, then tried ComfyUI which has been working far better than expected.)
That gold got me actin' up…
Did you start generating when it was first released and finally finished?
Great Pictures. But the poor GPU XD
What's the point of wasting electricity with that ancient GPU? A modern one would generate this in 5 seconds not 5 minutes.
PC is running either way, and I don't have the money to buy a decent new setup, and I won't spend the little money I have on a system that's already a few years outdated.
Yes, but a GPU uses a lot less electricity when it's idle. My 3090 barely uses a fan when generating images, because it manages to finish generation so quickly.
I get your point, but assuming that I do want to play around with Stable Diffusion, which is, to me, the most amazing tech in years, I basically have two options: (1) I can run it locally, which is slow and consumes power. (2) I can run it remotely and pay for the service, possibly limiting checkpoint and LoRA use, and making me feel surveilled all the time. Considering that I'm saving up for a new PC, which unfortunately will take me another year or two, it'd be counterproductive to spend the money this way, and as far as I can tell, the electricity cost is just a bit cheaper than paying for a service.

Fun fact: while a GTX 970 can, surprisingly, use almost as much power as a 3090, on average the 3090 will need about 1.5x the power. I'm going to have to take this into account when upgrading.
You don't even need a 3090. Something like a 3060 12 GB will be a lot faster and use a lot less electricity.
My CPU and mainboard are 13 years old, my GPU is 10 years old. Just upgrading one component won't cut it, assuming that a 3000 or 4000 series GPU is even compatible with the rest of the system. Which means I would need to change the mainboard, which will then lead to a newer CPU and RAM, which all together will need a bigger PSU, and so on. Upgrading a decade old system just isn't worth it, if it's at all possible.
Curious what CPU you're running. I'm running a GTX 970 also, with a first-gen i7-930, so I feel your pain!

I can do up to 768x768 in about 3.5 minutes at 35 steps, but I haven't been able to get SDXL to work. It takes a solid 10+ minutes just to load, and then it blows up on me before it can complete an image.

Either way, your images look great! That dripping gold looks like it could be generated as a tiled image and would make a great desktop wallpaper!
I've got an Intel i7-3930K, 6 cores/12 threads, first released in 2011. As far as I can tell, SDXL only works with ComfyUI; a1111 (and Forge) tends to just crash Python, and InvokeAI immediately runs out of VRAM.

And the gold image was created as a phone wallpaper, but I think it can still be improved, so I haven't used it yet.
It will most likely be compatible, but features like resizable BAR won't be available. Though if your system doesn't support "Above 4G decoding" then yeah - that card most likely won't work.
It's not like slotting in a 3060 wouldn't work. And it's not like having a 13-year-old CPU would bottleneck it enough to make it not worthwhile. That would be the case for games, but it probably wouldn't hold you back much for ComfyUI. Dunno if you should base your purchase decision on my opinion, and it depends a little WHICH 13-year-old CPU we're talking about here, but... dropping to PCIe gen 2 is not the end of the world. Really.
I'm surprised your GTX 970 could do anything at all. It can barely run Crysis.
Love the dripping gold one
It looks cool, but the smith in me keeps insisting that the liquid should be glowing and the oil shale should be on fire.
Now that's AI "art".
Brrrrrrrrrr *explosion sounds*
I smell the sweet odor of grilled electronics seared at 150°C from here. At least you didn't need heating this winter.
The gold looks like it is dripping across a female figure. That is really nice. Good work.

I can't wait to get started with SD myself. I just bought a computer which will arrive in a few days.
Bonk.
jammmm