Comfyui upscale models reddit

Upscaling: increasing the resolution and sharpness at the same time.
Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.
Upscaling on larger tiles will be less detailed / more blurry, and you will need more denoise, which in turn will start altering the result too much. Though, from what someone else stated, it comes down to use case.
I believe the problem comes from the interaction between the way Comfy's memory management loads checkpoint models (note that this issue still happens if smart memory is disabled) and Ultimate Upscale bypassing torch's garbage collection, because it's basically a janky wrapper for an Auto1111 extension.
In testing I found that you CANNOT pass latent data from SD1.5 to SDXL or vice versa, or you get a garbage result.
Generates an SD1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode and colour matching.
It's been trained to make any model produce higher-quality images at very low steps, like 4 or 5.
Look at this workflow: I get good results using stepped upscalers, Ultimate SD Upscaler and stuff. So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111?
If you don't want the distortion, decode the latent, upscale the image (Upscale Image By), then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.
DirectML (AMD cards on Windows): pip install torch-directml, then you can launch ComfyUI with: python main.py --directml
Also, both have a denoise value that drastically changes the result.
- Latent upscale looks much more detailed, but gets rid of the detail of the original image.
- Image upscale is less detailed, but more faithful to the image you upscale. The downside is that it takes a very long time.
So: VAE-decode to image, then VAE-encode to latent using the next model you're going to process with.
Edit: you could try the workflow to see it for yourself.
FWIW, I was using it WITH the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen.
Note: remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation.
I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.
These upscale models always upscale at a fixed ratio.
Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.
If you want a better grounding in making your own ComfyUI systems, consider checking out my tutorials.
I rarely use upscale-by-model on its own because of the odd artifacts you can get.
Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.
If you are going for fine details, don't upscale in 1024x1024 tiles on an SD15 model unless the model is specifically trained on such large sizes.
Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.
For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results.
Here is an example of how to use upscale models like ESRGAN.
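A minimal sketch of driving that from Python through ComfyUI's HTTP API. The node wiring (LoadImage, UpscaleModelLoader, ImageUpscaleWithModel, SaveImage) follows the docs text above; the server address, the node ids, and the file names "input.png" and "4x-UltraSharp.pth" are assumptions for illustration, so adjust them to your setup:

```python
import json
import urllib.request

# Hypothetical workflow in ComfyUI's API prompt format: load an image,
# upscale it with an ESRGAN-style model, and save the result.
prompt = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x-UltraSharp.pth"}},  # from models/upscale_models
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}

# Queue it on a locally running ComfyUI instance (default port assumed).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```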
I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts.
I love to go with an SDXL model for the initial image and with a good 1.5 model for the diffusion after scaling.
Sometimes models appear twice, for example "4xESRGAN" used by chaiNNer and "4x_ESRGAN" used by Automatic1111.
Like I can understand that using the Ultimate Upscale one could add more details through adding steps/noise or whatever you'd like to tweak on the node.
Additionally, the animatediff_models and clip_vision folders are placed in M:\AI_Tools\StabilityMatrix-win-x64\Data\Packages\ComfyUI\models.
I don't bother going over 4K usually though; you get diminishing returns on render times with only 8GB VRAM ;P
I've been using Stability Matrix and also installed ComfyUI portable. However, I'm facing an issue with sharing the model folder. All the models are located in M:\AI_Tools\StabilityMatrix-win-x64\Data\Models.
I'm using mm_sd_v15_v2.ckpt motion with Kosinkadink's Evolved nodes.
After generating my images I usually do Hires.fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x.
Always wanted to integrate one myself.
This new upscale workflow also runs very efficiently, being able to 1.5x upscale on 8GB VRAM NVIDIA GPUs without any major VRAM issues, as well as being able to go as high as 2.5x on 10GB NVIDIA GPUs.
These comparisons are done using ComfyUI with default node settings and fixed seeds.
The first is to use a model upscaler, which will work out of your image node; you can download those from a website that has dozens of models listed, but a popular one is some sort of ESRGAN 4x.
Upscale Model Examples
PS: If someone has access to Magnific AI, can you please upscale and post the result for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)?
Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements.
Tried the llite custom nodes with lllite models and was impressed.
You can also do latent upscales.
The aspect ratio of 16:9 is the same from the empty latent and anywhere else that image sizes are used.
I believe it should work with 8GB VRAM provided your SDXL model and upscale model are not super huge, e.g. use a 2x upscaler model.
Working on larger latents, the challenge is to keep the model somehow still generating an image that is relatively coherent with the original low-resolution image. This is done after the refined image is upscaled and encoded into a latent.
We are just using Ultimate SD Upscale with a few ControlNets and tile sizes of ~1024px.
I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.
May 5, 2024 · Hello, this is Hakanadori. Last time I explained clarity upscaling with 'clarity-upscaler' for A1111 and Forge; this time it's the ComfyUI version. 'clarity-upscaler' is not a single extension; it works by combining several features such as ControlNet and LoRA...
There are also "face detailer" workflows for faces specifically.
Search for upscale and click on Install for the models you want.
Upscale x1.5 ~ x2: no need for a model, it can be a cheap latent upscale. Sample again, denoise=0.5, you don't need that many steps. From there you can use a 4x upscale model and run sample again at low denoise if you want higher resolution (sketched below).
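A runnable toy sketch of that recipe. The sampler, the VAE-decoded image, and the 4x model are faked with stubs (none of these names are real ComfyUI APIs); only the order of operations is the point:

```python
import torch
import torch.nn.functional as F

def sample(latent, denoise):
    # Stub: a real KSampler would re-denoise the latent here.
    return latent

def model_4x(image):
    # Stub: a real ESRGAN-style 4x model, faked with nearest-neighbor resize.
    return F.interpolate(image, scale_factor=4, mode="nearest")

latent = torch.randn(1, 4, 64, 64)          # latent of a 512x512 base image
latent = F.interpolate(latent, scale_factor=1.5, mode="bilinear")  # cheap latent upscale
latent = sample(latent, denoise=0.5)        # second pass, few steps needed
pixels = torch.rand(1, 3, 768, 768)         # stand-in for the VAE-decoded image
pixels = model_4x(pixels)                   # optional 4x model pass -> 3072x3072
```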
So I made an upscale test workflow that uses the exact same latent input and destination size. I decided to pit the two head to head; here are the results, workflow pasted below (did not bind it to the image metadata because I am using a very custom, weird setup).
Here is a workflow that I use currently with Ultimate SD Upscale.
I am looking for good upscaler models to be used for SDXL in ComfyUI.
Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out.
With a denoise setting of 0.25 I get a good blending of the face without changing the image too much.
The realistic model that worked the best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely. Thank you, community!
It's not necessarily an inferior model; 1.5 is in a mature state where almost all the models and LoRAs are based on it, so you get better quality and speed with it. For SD 1.5 I'd go for Photon, RealisticVision or epiCRealism. Indeed SDXL is better, but it's not yet mature, as models are just appearing for it, and the same goes for LoRAs.
Is it the best way to install ControlNet? Because when I tried doing it manually...
Does anyone have any suggestions? Would it be better to do an iterative upscale?
Upscale to 2x and 4x in multi-steps, both with and without sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more).
Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.
For example, if you start with a 512x512 empty latent image, then apply a 4x model, apply "upscale by" 0.5 to get a 1024x1024 final image (512 x 4 x 0.5 = 1024).
Eh, if you build the right workflow, it will pop out 2K and 8K images without the need for a lot of RAM.
But it's weird. That's because of the model upscale. 'Cause I run SDXL-based models from the start and through three Ultimate Upscale nodes.
I have been using 4x-UltraSharp for as long as I can remember, but I'm just wondering what everyone else is using, and for which use case? I tried searching the subreddit, but the other posts are from earlier this year or 2022, so I am looking for updated information.
The restore functionality, which adds detail, doesn't work well with lightning/turbo models.
One does an image upscale and the other a latent upscale.
It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but this can be changed to whatever.
I haven't been able to replicate this in Comfy.
This is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption.
Makes sense when you look a bit into tensors, I guess.
No attempts to fix jpg artifacts, etc.
This is just a simple node built off what's given and some of the newer nodes that have come out.
And when purely upscaling, the best upscaler is called LDSR.
If you check the description on YT, I have a GitHub repo set up with sample images and workflow JSONs, as well as links to the LoRAs and upscale models.
You need to use the ImageScale node after it if you want to downscale the image to something smaller.
Latent-upscale it, or use a model upscale and then VAE-encode it again, and then run it through the second sampler.
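Those are the two paths people keep contrasting. Here is a toy sketch of both; the VAE stubs below just interpolate (real SD VAEs are learned networks with a fixed 8x spatial factor), so only the pipeline shape is meaningful:

```python
import torch
import torch.nn.functional as F

def vae_decode(latent):
    # Stub VAE decode (latent -> pixels); a stand-in, not a real VAE.
    return F.interpolate(latent[:, :3], scale_factor=8, mode="bilinear")

def vae_encode(pixels):
    # Stub VAE encode (pixels -> 4-channel latent).
    lat = F.interpolate(pixels, scale_factor=0.125, mode="bilinear")
    return torch.cat([lat, lat[:, :1]], dim=1)

latent = torch.randn(1, 4, 64, 64)
# Path A: stay in latent space (cheap, but effectively re-noises the image).
latent_a = F.interpolate(latent, scale_factor=2, mode="bilinear")
# Path B: decode, upscale in pixel space, re-encode for the second sampler.
pixels = F.interpolate(vae_decode(latent), scale_factor=2, mode="bicubic")
latent_b = vae_encode(pixels)
```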
For example, I can load an image, select a model (4x-UltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).
There is a face detailer node.
Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs.
Jan 13, 2024 · So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (SD 4X Upscale Model).
Jan 5, 2024 · Click on Install Models on the ComfyUI Manager menu.
I have a custom image resizer that ensures the input image matches the output dimensions.
For the best results, diffuse again with a low denoise, tiled or via Ultimate Upscale (without scaling!).
It's nothing spectacular, but it gives good, consistent results without the Upscale Image (using Model) node. Thanks.
There's "latent upscale by", but I don't want to upscale the latent image.
For comparison, in A1111 I drop the ReActor output image into the img2img tab, keep the same latent size, use a tile ControlNet model, choose the Ultimate SD Upscale script, and scale it by, i.e., the factor 2.6.
Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images, and use stuff entirely in latent space if you want.
My guess is you downloaded a workflow from somewhere, but the person who created that workflow has changed the filename of the upscale model, and that's why your ComfyUI can't find it. Solution: click the node that calls the upscale model and pick one.
I've so far achieved this with the Ultimate SD image upscale and using the 4x-Ultramix_restore upscale model.
A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).
In the ComfyUI Manager, select Install Models and then scroll down to see the ControlNet models; download the second ControlNet tile model (it specifically says in the description that you need this for tile upscaling).
The resolution is okay, but if possible I would like to get something better.
If I feel I need to add detail, I'll do some image-blend stuff and advanced samplers to inject the old face into the process.
There's only so much you can do with an SD1.5 model, since their training was done at a low resolution.
The workflow is kept very simple for this test: Load Image, Upscale, Save Image.
A step-by-step guide to mastering image quality.
I am curious both about which nodes are the best for this, and which models.
Good for depth and OpenPose; so far so good.
In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise.
Do you have ComfyUI Manager?
The same seed is probably not necessary and can cause bad artifacting via the "burn-in" problem when you stack same-seed samplers.
There are also other upscale methods that can upscale latents with less distortion; the standard ones are going to be bicubic, bilinear, and bislerp.
r/StableDiffusion: finally, AUTOMATIC1111 has fixed the high-VRAM issue in pre-release version 1.0-RC; it's taking only 7.5GB VRAM and swapping the refiner too. Use the --medvram-sdxl flag when starting.
In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more). So in those other UIs I can use my favorite upscaler (like NMKD's 4xSuperscalers), but I'm not forced to have them only multiply by 4x.
I want to upscale my image with a model, and then select the final size of it.
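A hedged continuation of the API sketch earlier: since ImageUpscaleWithModel always outputs the model's fixed ratio, an ImageScale node can be appended to pin the final resolution to an exact size such as 1500x1500. The node ids ("3", "4", "5") refer to the earlier sketch, and the lanczos method is an assumption:

```python
# Append an ImageScale node to force an exact output size.
prompt["5"] = {
    "class_type": "ImageScale",
    "inputs": {
        "image": ["3", 0],            # take the model-upscaled image
        "upscale_method": "lanczos",
        "width": 1500,
        "height": 1500,
        "crop": "disabled",
    },
}
prompt["4"]["inputs"]["images"] = ["5", 0]   # save the rescaled image instead
```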
Messing around with upscale-by-model is pointless for Hires.fix.
I generate an image that I like, then mute the first KSampler, unmute the Ultimate SD Upscaler, and upscale from that.
Just use another model loader and select another model.
I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).
Alright, back by popular DEMAND, here is a version of my infinite-skin-detail workflow that works without any external tools.
I have played around with it, but all the low-step fast models require a very low CFG too, so it's difficult to make them follow prompts strongly, especially when you want to go against the model's natural bias.
These values can be changed by changing the "Downsample" value, which has its own documentation in the workflow itself on values for sizes.
From what I've generated so far, the model upscale edges out the Ultimate Upscale slightly.
I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse-engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.
Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refining: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image.
Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know where to go from there.
Upscale Image (using Model) node: its inputs are upscale_model (the model used for upscaling) and image (the pixel images to be upscaled); its output is IMAGE (the upscaled images). Put the model files in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. Here is an example: you can load this image in ComfyUI to get the workflow.
Also, Ultimate SD Upscale is a node too; if you don't have enough VRAM, it tiles the image so that you don't run out of memory.
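A toy sketch of why tiling keeps VRAM bounded: the image is processed one fixed-size tile at a time rather than all at once. The model is faked with a nearest-neighbor stub, and real tiled upscalers also overlap and blend tiles to hide seams, which is omitted here:

```python
import torch
import torch.nn.functional as F

def run_model(patch):
    # Stub for the upscale model, faked with a nearest-neighbor 4x resize.
    return F.interpolate(patch, scale_factor=4, mode="nearest")

def tiled_upscale(image, tile=512, scale=4):
    # Upscale each tile independently so peak memory tracks the tile size,
    # not the full image size.
    _, c, h, w = image.shape
    out = torch.zeros(1, c, h * scale, w * scale)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[:, :, y:y + tile, x:x + tile]
            ph, pw = patch.shape[2], patch.shape[3]
            out[:, :, y * scale:(y + ph) * scale,
                      x * scale:(x + pw) * scale] = run_model(patch)
    return out

print(tiled_upscale(torch.rand(1, 3, 1024, 1024)).shape)  # [1, 3, 4096, 4096]
```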
If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook.
For upscaling I mainly used the chaiNNer application with models from the Upscale Wiki Model Database, but I also used the "fast stable diffusion" Automatic1111 Google Colab and the Replicate website's super-resolution collection. The last one takes time, I must admit, but it runs well and allows me to generate good-quality images (I managed to get a seam-fix settings config that works well for it, hence the long processing).
You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.
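A runnable sketch of that arithmetic, with the fixed-ratio model again faked by a stub: a 4x model always outputs 4x, so an effective 2x is the model pass followed by a 0.5 bicubic rescale (512 * 4 * 0.5 = 1024):

```python
import torch
import torch.nn.functional as F

def model_4x(image):
    # Stub for a fixed-ratio 4x upscale model.
    return F.interpolate(image, scale_factor=4, mode="nearest")

image = torch.rand(1, 3, 512, 512)
big = model_4x(image)                                          # always 4x -> 2048x2048
final = F.interpolate(big, scale_factor=0.5, mode="bicubic")   # "upscale by" 0.5
print(final.shape)  # torch.Size([1, 3, 1024, 1024])
```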