
Upscaling in ComfyUI: a Reddit digest


Upscaling in ComfyUI comes up constantly on Reddit; this page digests the recurring advice.

First, the basics. A plain upscale node alone, even at 4x, does not do much: it enlarges the pixels without adding any detail. For a straight model-based upscale, connect a Load Upscale Model node to an Upscale Image (using Model) node after your VAE Decode, then route the result to your preview/save node. With a model such as 4x-UltraSharp you can go from 512x512 to 2048x2048 and it is still blazingly fast. If you get a "'NoneType' object has no attribute 'copy'" error, the upscale model is usually missing; click the loader node and pick one.

A1111's Ultimate SD Upscale extension already exists in ComfyUI as the "Ultimate SD Upscale for ComfyUI" custom node suite, so there is no need to replicate it by hand. A common pattern: generate a batch, cherry-pick the best image, and run it through Ultimate SD Upscale at 2x. For hires-fix-style flows, plug the first sampler's output into a Latent Upscale By node set to your target factor; lower values like 1.5 usually behave better than 2x or more. There is also a newer SD_4XUpscale_Conditioning node that adds support for the x4-upscaler-ema.safetensors model. If you want a workflow that generates a low-resolution image and then upscales it immediately, the official HiRes examples are exactly that. One benchmark anecdote: for a 2x upscale, Automatic1111 was about four times quicker than ComfyUI on a 3090, reason unknown.
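To make the Load Upscale Model -> Upscale Image (using Model) chain concrete, here is a minimal sketch that submits an API-format graph to a locally running ComfyUI instance. It assumes a default install listening on 127.0.0.1:8188, an input.png already in ComfyUI's input folder, and a 4x-UltraSharp.pth file under models/upscale_models; the file names and node ids are illustrative.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"

def queue_prompt(graph: dict) -> None:
    """Submit an API-format graph to a local ComfyUI instance."""
    payload = json.dumps({"prompt": graph}).encode("utf-8")
    req = urllib.request.Request(
        COMFY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    print(urllib.request.urlopen(req).read().decode())

# Each node has a class_type and inputs; links are [source_id, output_index].
graph = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x-UltraSharp.pth"}},
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}

queue_prompt(graph)
```

The same queue_prompt helper is reused by the later sketches in this digest.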
The Upscale Image nodes also downscale: set a direct resolution on Upscale Image, or a factor under 1 on Upscale Image By. That matters because ComfyUI's upscale-with-model node has no output size option; the model dictates the factor (usually 4x), so you scale the image back down manually afterwards. The standard pixel-space detour is therefore: model upscale, downscale to the target size (for example bicubic at 0.5 if you want to divide the 4x output by two), VAE-encode the image again, and run it through a second sampler. If you doubt that raw latent upscaling hurts, try a VAEDecode immediately after a latent upscale and look at the result.

Model-specific notes: AuraSR v1 is ultra-sensitive to any kind of image compression, so feed it images straight out of SD, prior to any lossy save, or the output will probably be terrible. For portraits, one user chains FaceDetailer > Ultimate SD Upscale > EyeDetailer > EyeDetailer.
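Because the model fixes the factor, hitting an exact final size is just arithmetic on the follow-up Upscale Image By step. A small plain-Python sketch of that arithmetic (the function name is ours, not a ComfyUI API):

```python
def residual_scale(src: int, target: int, model_factor: int = 4) -> float:
    """Factor for the post-model downscale step.

    E.g. a 512px source through a 4x model is 2048px; a 0.5 factor
    then lands on a 1024px final image (512 * 4 * 0.5 = 1024).
    """
    return target / (src * model_factor)

print(residual_scale(512, 1024))  # 0.5
print(residual_scale(512, 1536))  # 0.75
```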
On model choice: 4x-UltraSharp is the usual pick for realistic material, and the standard 4x ESRGAN is a good jack of all trades that doesn't come with a crazy performance cost; if you are low on VRAM, use a tiled variant. Natively, ComfyUI gives you two ways to perform a "Hires Fix": a latent upscale, or an upscaling model applied to the decoded image. The arithmetic for the model route: a 512x512 image through a 4x model, scaled by 0.5, lands at 1024x1024 (512 * 4 * 0.5 = 1024). Latent upscale below roughly 0.5 denoise tends to cause issues, so many prefer a 4x UltraSharp image upscale, re-encoded and resampled through a KSampler at the higher resolution with around 0.3 denoise. Iterative workflows (upscale a little, resample, repeat) can reach any resolution while adding detail along the way. Upscale models install cleanly from ComfyUI Manager: click Install Models, search for "upscale", and install the ones you want. One recurring Ultimate SD Upscale complaint is a vague grid pattern of squares in the final image; seam fixes are covered below.
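Here is the latent-upscale variant of that Hires Fix as an API-format graph, in the same style as the first sketch. The checkpoint name, prompt text, and the 1.5x / 0.5-denoise values are illustrative placeholders:

```python
# Hires-fix-style two-pass sampling, API format (illustrative values).
hires_fix = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_v1-5.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a cozy anime village", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, lowres", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",  # first pass: full denoise
          "inputs": {"model": ["1", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "denoise": 1.0}},
    "6": {"class_type": "LatentUpscaleBy",  # modest 1.5x, not 2x+
          "inputs": {"samples": ["5", 0],
                     "upscale_method": "nearest-exact", "scale_by": 1.5}},
    "7": {"class_type": "KSampler",  # second pass: partial denoise
          "inputs": {"model": ["1", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["6", 0], "denoise": 0.5}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "hires"}},
}
```

Submitting it is the same queue_prompt(hires_fix) call as before.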
A concrete Ultimate SD Upscale recipe: run it at 2x with UltraSharp, tile resolution 640x640, mask blur 16, and around 0.4 denoise, as with the original SD Upscale script. The perennial tradeoff: either you cannot get rid of visible seams, or the image is so constrained by low denoise that it lacks detail. If you want more detail, latent upscale does better, and noise injection lets even more in: you need noise in order to diffuse into details. Two further tips: upscale smaller images to at least 1024x1024 before sending them to be inpainted; and for SUPIR, some users keep a dedicated ComfyUI install just for it and point it at their existing model folders by putting the full paths (at least the base models and checkpoint folders) into the extra model paths config.
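That folder sharing lives in ComfyUI's extra_model_paths.yaml (the repo ships an extra_model_paths.yaml.example to copy). A minimal sketch for the SUPIR-only install, assuming the main install sits at /path/to/main/ComfyUI; the section name and paths are illustrative:

```yaml
# extra_model_paths.yaml in the second (SUPIR-only) ComfyUI install,
# pointing back at the main install's model folders.
main_install:
    base_path: /path/to/main/ComfyUI/models
    checkpoints: checkpoints
    vae: vae
    upscale_models: upscale_models
```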
Moving between UIs is easier than it looks: dragging and dropping an A1111 PNG into ComfyUI reproduces its parameters most of the time, and dragging a ComfyUI image in replicates the exact workflow and seed. Note that Reddit removes ComfyUI metadata when you upload a picture, so images saved from Reddit will not load a workflow. Support keeps broadening too; a ComfyUI weekly update added DAT upscale model support. Which upscale node you use rarely matters, except LDSR and a few other special upscalers that need their own nodes. If you want more resolution, you can simply add another Ultimate SD Upscale node after the first. A neat SDXL idea: use the refiner as the model for the upscaling pass instead of a 1.5 checkpoint. And a warning that keeps coming back: even with ControlNets, if you simply upscale latents and then de-noise them, you get weird artifacts, like a face appearing where a teddy bear should be.
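A quick way to check whether an image still carries its ComfyUI graph (for instance after a round trip through Reddit) is to read the PNG text chunks. A minimal sketch, assuming Pillow is installed and a comfy_output.png on disk:

```python
import json
from PIL import Image

img = Image.open("comfy_output.png")
# ComfyUI saves its graph into PNG text chunks: "workflow" holds the
# UI-format graph, "prompt" the API-format one. Pillow exposes both
# through img.info.
for key in ("workflow", "prompt"):
    data = img.info.get(key)
    if data is None:
        print(f"{key}: missing (metadata stripped?)")
    else:
        print(f"{key}: {len(json.loads(data))} top-level entries")
```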
On that grid pattern: increasing the mask blur lost details, but increasing the tile padding to 64 helped, and you can repeat the upscale-and-fix pass multiple times if you wish. The older alternatives disappoint in opposite directions: Hires fix and the Loopback Scaler either change too much about the image (especially faces) or do not add enough detail, leaving the result too smooth. Watch the scaling method as well: nearest-exact is a crude image upscaling algorithm, and combined with a low denoise strength and step count in the KSampler it means you are basically doing nothing to the image. Remember that every sampler node requires a latent image as input, so pixel-space upscales must be VAE-encoded before resampling; latent upscale and pixel upscale are genuinely different operations.
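Collecting those knobs in one place: the settings below are what you would dial into an UltimateSDUpscale node, written as a Python dict for reference. The input names follow the ComfyUI port of the A1111 script as we recall them, so treat the exact spellings as assumptions and check the node's widgets:

```python
# Reference settings for an UltimateSDUpscale node (custom node suite).
# Input names are assumptions from memory; values follow the tips above.
ultimate_sd_upscale_settings = {
    "upscale_by": 2.0,        # 2x per pass; chain another node for more
    "mode_type": "Linear",    # tile ordering
    "tile_width": 640,
    "tile_height": 640,
    "mask_blur": 16,          # higher values cost detail
    "tile_padding": 64,       # raising this to 64 helped hide seams
    "denoise": 0.4,           # ~0.4, as with the original SD Upscale
    "seam_fix_mode": "None",
}
```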
For very large targets, people stack stages; one 4K/8K pipeline starts with CCSR for a 2x first stage (the later stages appear at the end of this digest). If Ultimate SD Upscale looks like it is "doing the same process again and again", that is usually just its tiling at work; a console log like "Upscaling iteration 1 with scale factor 2, tile size 768x768, 6 tiles in a 2x3 grid, redraw enabled, seams fix: none" is normal. Two common workflow gotchas: a batch size above 1 on the loader node breaks some upscale workflows, so change it back to 1 and try again; and outdated custom nodes are cured with Fetch Updates and then Update in ComfyUI Manager. For hands, yes: Ultimate SD Upscale with an SD 1.5 model plus embeddings and/or LoRAs for better hands, with ControlNet set to "more important". And if generations come out inexplicably weird, load a known-good workflow from someone else before debugging your own; sometimes that alone fixes it.
The pixel-space method consists of a few steps: decode the samples into an image, upscale the image using an upscaling model, then encode the image back into a latent for further sampling. The governing rule: either upscale in pixel space first and do a low-denoise second pass, or upscale in latent space and do a high-denoise second pass. One user keeps two workflows along those lines: latent upscale with around 0.5 denoise, and model upscale with around 0.2. Denoise choice really changes the image, so compare before committing. Pushed iteratively this scales a long way: sample a 3072x1280 image, sample again for more detail, then upscale 4x for a 12288x5120 result. Opinions on Ultimate SD Upscale are not unanimous, either; after two days of testing, one user found it detrimental for their material and stuck with plain 1.5-2x passes, which gave generally nice results.
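That decode -> model upscale -> downscale -> re-encode -> resample detour, as an API-format fragment extending the hires_fix sketch above (node ids "1", "2", "3", and "7" refer to that graph; the 0.5 scale and 0.25 denoise are illustrative):

```python
# Pixel-space second pass: decode -> 4x model upscale -> downscale ->
# re-encode -> low-denoise resample. Extends the earlier hires_fix graph.
pixel_pass = {
    "10": {"class_type": "VAEDecode",
           "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "11": {"class_type": "UpscaleModelLoader",
           "inputs": {"model_name": "4x-UltraSharp.pth"}},
    "12": {"class_type": "ImageUpscaleWithModel",
           "inputs": {"upscale_model": ["11", 0], "image": ["10", 0]}},
    "13": {"class_type": "ImageScaleBy",   # 4x * 0.5 = net 2x
           "inputs": {"image": ["12", 0],
                      "upscale_method": "bicubic", "scale_by": 0.5}},
    "14": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["13", 0], "vae": ["1", 2]}},
    "15": {"class_type": "KSampler",       # low denoise keeps composition
           "inputs": {"model": ["1", 0], "seed": 42, "steps": 20,
                      "cfg": 7.0, "sampler_name": "euler",
                      "scheduler": "normal", "positive": ["2", 0],
                      "negative": ["3", 0], "latent_image": ["14", 0],
                      "denoise": 0.25}},
}
```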
Low VRAM does not rule any of this out. With only 4GB, large single images crash the process, but iterative upscales work: first iteratively in latent space, then iteratively on the image itself; a Tiled KSampler keeps the sampling passes bounded too. A related inpainting trick: besides whole-image and mask-only inpainting, you can upscale just the masked region, inpaint it at the higher resolution, then downscale it back to the original resolution when pasting it in. As for which upscaler, people have tried practically everything ComfyUI offers (LDSR, latent upscale, model upscalers such as NMKD's, the Ultimate SD Upscale node, hires fix, iterative latent upscale), and for cleaning up old sources such as late-90s anime screenshots, where the lines are dirtier and the colors dimmer so a plain upscale is not enough, SUPIR as a sharpening first pass gets repeated mentions.
Keep in mind that the upscale amount is determined by the upscale model itself: upscale-by-model takes you to 2x or 4x or whatever the model was trained for, and you insert an ImageScale node to reach a specific size. The documented baseline workflow is exactly that: first sampling at 512x512, upscale with 4x ESRGAN, downscale the image to 1024x1024, and sample again. For face swaps, below about 0.25 denoise gives a good blending of the face without changing the image too much. Latent upscale has a known failure mode worth watching for: resampling at the higher resolution can create an additional person inside an already composed image. And on modest hardware, the Kohya Deep Shrink node does wonders on a video card with just 8GB.
On efficiency: a well-built workflow manages a 1.5x upscale on 8GB NVIDIA GPUs without major VRAM issues, and as high as 2.5x on 10GB cards. The most-repeated method in these threads: upscale the image with a model, downscaling afterwards if necessary (most upscalers are 4x, which is often too big to process), then send it back through VAE encode and a sampler at roughly 0.2-0.35 denoise and 10 steps or less; it takes a bit longer but gives more consistent results than latent upscale. Tiled approaches work more like DLSS, tile by tile, and are faster than iterative passes, though the outputs are less detailed. To reuse any posted workflow, grab the image from your file folder and drag it onto the ComfyUI window; the embedded graph loads. If ComfyUI then cannot find the upscale model, the workflow author probably renamed the model file; click the node that calls the upscale model and pick your local copy.
There is no separate "mode" for img2img in ComfyUI: encode an image and sample it with denoise below 1 and you have img2img. Here is an example with some math to double an original image's resolution: upscale-by-model takes you up 2x or 4x or whatever the model is trained for, so you insert an ImageScale node to land exactly on twice the source size. Scheduler choice has been tested too: side-by-side runs of KSampler schedulers during an upscale pass, with default node settings and fixed seeds, showed only modest differences. And the dedicated tools are not untouchable; in one comparison the results matched the newest Topaz.
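Scripting such a scheduler sweep through the API is straightforward; a sketch reusing the hires_fix graph and queue_prompt helper from earlier (the scheduler names are ComfyUI built-ins, and sgm_uniform already appeared in a recipe above):

```python
import copy

schedulers = ["normal", "karras", "exponential", "sgm_uniform", "simple"]

for sched in schedulers:
    g = copy.deepcopy(hires_fix)           # graph from the earlier sketch
    g["7"]["inputs"]["scheduler"] = sched  # vary only the second (upscale) pass
    g["9"]["inputs"]["filename_prefix"] = f"sched_{sched}"
    queue_prompt(g)                        # fixed seed keeps runs comparable
```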
A selection habit saves a lot of compute: generate 10-20 images per prompt, then pick one or two to upscale; workflows that upscale every creation are rarely useful. If you use Iterative Upscale from the Impact Pack, it may respond better to deliberately added noise, via noise injection or an unsampler hook, than to plain low-denoise looping. One unresolved complaint: Ultimate SD Upscale sometimes keeps producing a weird background, and the tile and seam settings above are the first things to check.
Finally, the full three-stage 4K/8K pipeline promised above: the first stage utilizes CCSR for a 2x upscale, the second stage utilizes SUPIR up to 4K size, and the third stage utilizes SD Ultimate Upscale up to 8K size. For face swaps, do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very small denoise amounts; the A1111 equivalent is dropping the ReActor output into img2img, keeping the same latent size, adding a tile ControlNet model, and running the Ultimate SD Upscale script at 2x. A tiled ControlNet plus Ultimate Upscale at 3-4x can yield crisp images up to 6Kx6K. Two closing notes: a latent upscale still earns a place in many workflows for adding detail (whatever works, do some comparisons), and it is not recommended to do a 4x upscale with a 4x upscaler such as 4x Siax; counterintuitively, it is better to use a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale, scaling the remainder down.

