
ComfyUI workflow PNGs — a Reddit roundup

We all know it is possible to load a workflow, or drag one into ComfyUI, from a PNG image. The image *is* the PNG file: you save it, drop it into ComfyUI, and the workflow it contains appears. OP probably thinks that ComfyUI has the workflow included with the PNG, and it does. To download a workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. Saving and loading workflows as JSON files is supported too, along with SD 1.x, 2.x, SDXL, and LoRA.

I'm trying to do the same as hires fix, with a second model at a low weight. One of the most annoying problems I've encountered with ComfyUI is that after installing a custom node, I have to poke around and guess where in the context menu the new node is located. My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above, without human intervention. I finally found a sample workflow on Civitai that generates an image for me, and I swear I can't tell any functional difference between it and my workflow.

For the first two methods, you can use the Checkpoint Save node to save the newly created inpainting model, so that you don't have to merge it again each time you switch. Next, I think I should separate my style and base prompts using something like Mikey's nodes; it'll work as a base to start from, at least. I'm also trying to build a workflow that can take an input image and vary it by a given amount, and an inpainting setup where a transparent PNG in the original size, containing only the newly inpainted part, is generated. Has anyone else messed around with GLIGEN much?

Building your own is the best advice there is when starting out with ComfyUI, imo. The node interface can be used to create complex workflows, like one for hires fix, or much more. So, as long as you don't expect ComfyUI not to break occasionally, sure, give it a go. Also psyched this community seems to be so helpful.

Post titles that came up in this thread: "A journey through seasons — morph workflow, now with 4 reference images"; "Introducing ComfyUI Launcher!"; "See the power of a simple SVD workflow in ComfyUI"; and "Product photo relighting workflow — start from an existing picture or generate a product, segment the subject via SAM, generate a new background, relight the picture, keep finer details".

The PNG files produced by ComfyUI contain all the workflow info.
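Since so many of the comments above hinge on it, here is the mechanism in one place: ComfyUI writes the workflow into PNG text chunks, and it survives only as long as the hosting site leaves metadata alone. A minimal sketch for checking a file yourself, assuming Pillow is installed; the filename is just an example:

```python
import json
from PIL import Image  # pip install Pillow

def extract_workflow(png_path):
    """Return the workflow JSON embedded in a ComfyUI PNG, or None.

    ComfyUI's default SaveImage node writes two text chunks:
    'prompt' (the executable node graph) and 'workflow' (the editor graph).
    Pillow exposes PNG text chunks through the image's .info dict.
    """
    info = Image.open(png_path).info
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None

wf = extract_workflow("ComfyUI_00001_.png")  # example filename
if wf is None:
    print("no workflow chunk - the host probably stripped the metadata")
else:
    print("found a workflow with", len(wf.get("nodes", [])), "nodes")
```

If this prints nothing for an image that "should" carry a workflow, the uploader's site (Reddit, most image hosts) stripped the chunks — hence all the zip files and Civitai links in this thread.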
AP Workflow 4.0 for ComfyUI (Hand Detailer, Face Detailer, Free Lunch, Image Chooser, XY Plot, and more — the full feature list appears further down): I implemented the experimental Free Lunch optimization node.

Comfy Workflows — explore thousands of workflows created by the community. Open the file browser and upload your images and JSON files, then simply copy their links (right click → copy link). This lets you run custom workflows for free (200 times per day), lets others use what you add, and lets you use theirs too. By default, all your workflows are saved to the `/ComfyUI/my_workflows` folder, and we now auto-backup your workflows to your disk folder, so the data should be much more reliable: you can always find your backups on your disk.

On the background-removal node: very nice results overall, but sometimes I just need to tweak some extra parameters, and there are no parameters here at all! Zero config would be cool if the model worked 100% of the time, which is not the case here. I managed to modify the BRIA_RMBG.py file by lowering the tensor normalize values from 0.5 to 0.1, which slightly improved my results for a specific case.

My question, however, is: can I drag an existing workflow, or part of an existing one, into the editor? The complete workflow you used to create an image is also saved in the file's metadata — though it seems that this feature is only implemented for PNG, and Reddit will strip it away. There are a bunch of images that can be loaded as a workflow by ComfyUI: you download the PNG and load it. Loading full workflows (with seeds) from generated PNG, WebP, and FLAC files is supported. If you see a few red boxes, be sure to read the Questions section on the page. If you wanted to share the workflow via the PNG picture, you can simply open that image in ComfyUI, or simply drag and drop it. Where can I download images that have the workflow included? I want to load an image in ComfyUI and have the workflow appear, just as it does when I load a saved image from my own work. I'm going to list all the methods I've tried below.

ComfyUI is VERY memory efficient and has a great deal of flexibility, especially where a user needs a complex set of instructions. ComfyUI only allows stacking LoRA nodes, as far as I know. That being said, some users moving from A1111 to Comfy… While I normally dislike providing workflows — I feel it's better to teach someone to catch a fish than to give them one — I was looking for tools that could help me set up ComfyUI workflows automatically and also let me use ComfyUI as a backend, but couldn't find any.

SDXL 1.0 ComfyUI tutorial: the readme file has been updated with SDXL 1.0 download links and new workflow PNG files, and the updated free-tier Google Colab now auto-downloads SDXL 1.0 plus the refiner and installs ComfyUI.

Other shared workflows in the thread: "Image Realistic Composite & Refine" and "Animate your still images with this AutoCinemagraph ComfyUI workflow".
Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning. It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt is simply "Caption the image" or "Describe the image", Florence2 wins.

But being reasonable here, the majority of people who are building on Stable Diffusion are artists and creatives who are not primarily Python/AI developers. For example, it would be very cool if one could place the node numbers on a grid…

Forgot to copy and paste my original comment in the original posting 😅 This may be well known, but I just learned about it recently. Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery. The checkpoint really matters: I had to try quite a few, as its opinions show up strongly. I had to place the image into a zip, because people have told me that Reddit strips PNGs of metadata — and yes, this is arcane as fk and I have no idea why.

I've been using ComfyUI for a few weeks now and really like the flexibility it offers. The images look better than most 1.5-based models, with greater detail in SDXL 0.9, but it looks like I need to switch my upscaling method. Not automatic, but really simple: it only takes 2 clicks to get the upscale (using SDXL Ultimate workflow 2.4 with the switch). I'd be keen to see your workflow too. Here is my current droppable master workflow for ComfyUI SDXL; you can save the workflow as a JSON file with the queue control panel's "Save" workflow button. Other pieces mentioned: reference image analysis for extracting images/maps for use with ControlNet, and exporting the adjusted Z-depth as a PNG sequence for IPAdapter and ControlNet.

Had a working Windows manual Comfy install suddenly break: it won't load a workflow from PNG, either through the load menu or by dragging. I'm facing the same problem: whenever I attempt to drag PNG/JPG files that include workflows into ComfyUI — be it examples from new plugins or unfamiliar PNG files — nothing happens. ComfyUI won't load the workflow from JSON or PNG: whether it's through dragging or using the Load menu to read files, nodes cannot be imported into the workspace.
Some commonly used blocks. From my tests, the FreeU settings you'll like vary from one situation to another (checkpoint, prompt, etc.), so my understanding of the "default" values is that they are "a setting where you will definitely see FreeU doing something", not "the best setting for any possible situation". Newcomers should familiarize themselves with easier-to-understand workflows first.

Each PNG contains the workflows using these CropAndStitch nodes. The default SaveImage node saves generated images as .png files with the full workflow embedded (as an export of the workflow image to PNG from ComfyUI), making it dead simple to reproduce the image or make new ones using the same workflow.

u/SleepRealisticCheck6190 is right: you'll want to use IPAdapters and ControlNets. Ignore the prompts and setup; suggestions welcome! And let me know if you need help replicating some of the concepts in my process. This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. EDIT: for example, this workflow shows the use of the other prompt windows.

This optimized and annotated Stable Video Diffusion workflow created by VereVolf lets you easily do text2vid and img2vid. ComfyUI needs a standalone node manager imo — something that can do the whole install process and make sure the correct install paths are being used for modules. My videos do include workflows, for the most part, in the video description.

What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion; related tooling also covers other platforms like Flux, AuraFlow, PixArt, etc. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his… There's a .png you can drag into ComfyUI to test that the nodes are working, or add them to your current workflow to try them out.

Is there a way to export a ComfyUI workflow so it can run on a server? The folder with the CSV files is located in "ComfyUI\custom_nodes\ComfyUI-CSV_Loader\CSV" to keep everything contained. This seems like an oft-asked and well-documented problem: it's really nice how the node graph can be saved in a PNG file, but Reddit doesn't allow more than one image per comment, and the image itself was supposed to be the workflow PNG — I heard Reddit is stripping the metadata from it. So OP, please upload the PNG to civitai.com and then post a link back here if you are willing to share it; otherwise, please change the flair to "Workflow not included". Here, you can use the online ComfyUI free of charge to quickly generate and save your workflow.
ComfyUI is a completely different conceptual approach to generative art. My ComfyUI workflow uses the latent upscaler (nearest-exact) set to 512x912 multiplied by 2, and it takes around 120–140 seconds per image at 30 steps with SDXL 0.9. I set up a workflow with a first pass and a hires pass. If you are a commercial entity and want some presets that might work for different style transformations, feel free to contact me on Reddit or on my social accounts.

Upcoming tutorial: SDXL LoRA, plus using a 1.5 LoRA with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

To split an animated WebP into individual PNG frames:

magick convert -coalesce filename.webp output-%04d.png

I built a free website where you can share and discover thousands of ComfyUI workflows: https://comfyworkflows.com/. Most workflows you see on GitHub can also be downloaded. I tried to find either of those two examples, but I have so many damn images I couldn't find them.

I've encountered an issue where, every time I try to drag PNG/JPG files that contain workflows into ComfyUI — including examples from new plugins and unfamiliar PNGs — Comfy won't load the workflow from PNG, only from JSON.

Could anyone recommend the most effective way to do a quick face swap on an MP4 video? It doesn't necessarily have to be with ComfyUI; I'm open to any tools or methods that offer good quality and reasonable speed.
My actual workflow file is a little messed up at the moment, and I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. But let me know if you need help replicating some of the concepts in my process.

Check out Nerdy Rodent's series about his "reposer" workflows on YouTube. This video specifically addresses clothing replacement: Face + Pose + Clothing. If you are getting started, check out his other videos as well; they are all pretty great. If the PNG is the original one from ComfyUI, then it should contain the workflow.

Take a LoRA of person A and a LoRA of person B and place them into the same photo (SD 1.5, not XL). I know you can do this by generating an image of two people using one LoRA (it will make the same person twice) and then inpainting the face with a different LoRA, using OpenPose / Regional Prompter. I can already use wildcards in ComfyUI via Lilly Nodes, but there's no node I know of that makes it possible to call one or more LoRAs from a text prompt — whereas a single wildcard prompt can range from 0 LoRAs to 10.

I recently discovered the existence of the GLIGEN nodes in ComfyUI and thought I would share some of the images I made using them (more in the Civitai post link). For the life of me, though, I can't put together a workflow that actually produces an image with it; all I get is a noise pattern, like the sampler isn't actually doing any denoising. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

Layer copy & paste this PNG on top of… I noticed that ComfyUI is only able to load workflows saved with the "Save" button, and not with the "Save API Format" button. It is a powerful workflow that lets your imagination run wild.
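That "Save" vs "Save (API Format)" split is expected behavior: the editor reloads the former, while the latter is a flattened graph meant to be POSTed to a running ComfyUI server. A sketch along the lines of ComfyUI's bundled API examples — the filename and the node id "3" are placeholders, not taken from any post above:

```python
import json
import urllib.request
import uuid

SERVER = "http://127.0.0.1:8188"  # default local ComfyUI address

# Load a workflow exported with "Save (API Format)" from a file
# instead of hard-coding it as an inline string.
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# In API format, node ids are the top-level keys, so inputs can be
# tweaked before queueing; "3" is a hypothetical KSampler node id.
workflow["3"]["inputs"]["seed"] = 42

payload = json.dumps({
    "prompt": workflow,               # the executable graph
    "client_id": str(uuid.uuid4()),   # lets you match websocket events later
}).encode("utf-8")

req = urllib.request.Request(f"{SERVER}/prompt", data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))    # the response includes the prompt_id
```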
So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything — it looks like the metadata is not complete.

Download this first, put it into the folder inside ComfyUI called custom_nodes, and after that restart ComfyUI. You should then see a new button on the left tab (the last one); click that, then click "missing custom nodes", and install the one you need. After you have installed it, restart ComfyUI once more and it should work.

Mixamo gives you a lot of "free" animations on a couple of model types that could be img2img'd into other characters. This RAVE workflow, in combination with AnimateDiff, allows you to change a main subject character into something completely different. It works quite well for me so far, but I keep upgrading it — SD 1.5 from 512x512 to 2048x2048. You can construct an image generation workflow by chaining different blocks (called nodes) together.

I would like to edit the screenshot with the saved workflow in Photoshop and then save the metadata again. Coming from A1111, I like its ability to save the output as lossy webm, so I could save the gens "that didn't make it" as 50 kB webm files, and only the ones worth sharing as 500 kB PNGs. However, I didn't manage to find any option to set the output format in ComfyUI, and of the custom nodes I've come upon that do WebP or JPG saves, none of them seem to be able to embed the full workflow.

Plush-for-ComfyUI contains two OpenAI-enabled nodes. Style Prompt takes your prompt and the art style you specify and generates a prompt from ChatGPT-3 or 4 that Stable Diffusion can use to generate an image in that style. OAI Dall_e 3 takes your prompt and parameters and produces a DALL-E 3 image in ComfyUI.

Sure, my paintbrush never crashed after an update; but then ComfyUI doesn't get crimped in my bag, my LoRAs don't need cleaning, and a PNG is quite a bit cheaper than canvas. After learning Auto1111 for a week, I'm switching to Comfy due to the rudimentary nature of extensions for everything and persisting memory issues with my 6 GB GTX 1660.

From Think Diffusion's "Stable Diffusion ComfyUI Top 10 Cool Workflows": Img2Img, SDXL Default, Upscaling, Merging 2 Images Together, and SDXL using the Fooocus patch. This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it.

ComfyFlowApp: from a ComfyUI workflow to a web app in seconds — share a ComfyUI workflow with 50 nodes and 10 models with ComfyFlowApp in two steps. Due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended. I just released version 4.0 of my AP Workflow for ComfyUI. The next chapter for ComfyUI…
I couldn't decipher it either, but I think I found something that works. This workflow allows you to change clothes or objects in an existing image. If you know the required style, you can work with the IP-Adapter and upload a reference image; and if you want to get new ideas or directions for design, you can create a large number of variations in a process that is mostly automatic.

ComfyUI is a web-based Stable Diffusion interface optimized for workflow customization. If you're an "ML first" developer like myself, sure, it makes more sense to work in *nix directly.

So I made a Python executable file using the comfyui-to-python extension, but the file I got ran all the image processing yet never gave me any output image. Can anyone please look at it? It is fairly simple, but I am messing it up somehow. Note that the API workflows are not the same format as an image workflow: you create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button. My setup uses the built-in ComfyUI API to send data back and forth between the ComfyUI instance and the interface, and I have a custom image resizer that ensures the input image matches the output dimensions.

Here are details on the workflow I created: this is an img2img method where I use the Blip Model Loader from WAS to set the positive caption. For my task, I'm copy-and-pasting a subject image (a transparent PNG) into a background, but then I want to do something to make it look like… This is what I learned: Movie Me (InstantID with Image2Image) and Picture Me Turbo (InstantID with Image2Image) | ComfyUI Workflows (openart.ai).

Hi again — I created a simple node for LLMs (I made this for a T5 model, but it should work with GPT-2-type models). An All-in-One FluxDev workflow in ComfyUI combines various techniques for generating images with the FluxDev model, including img2img and text2img (Ling-APE/ComfyUI-All-in-One-FluxDev). Here is one I've been working on, using ControlNet to combine depth, blurred HED, and noise as a second pass; it has been coming out with some pretty nice variations of the originally generated images. You can also just load an image on the left side of the ControlNet section and use it that way. Edit: if you use the link above, you'll need to replace the…

Just ctrl+V into ComfyUI; anywhere works if you have a ComfyUI workflow on the clipboard. I just made the move from A1111 to ComfyUI a few days ago. How it works: download and drop any image from the… And as noted above, ComfyUI's inpainting and masking aren't perfect — I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. Is this workflow at all possible in ComfyUI? Try downloading the PNG (download it from the image itself, not from the post's image gallery, which is a JPEG preview).

Learned from the following video: "Stable Cascade in ComfyUI Made Simple" (6m56s, posted Feb 19, 2024 by the How Do? channel on YouTube). Stage A >>…
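For API-driven img2img setups like the ones described above, the input image has to exist in ComfyUI's input folder first, so that a LoadImage node can reference it by name. A hedged sketch using the server's upload endpoint; it assumes the requests package, a default local server, and a hypothetical source.png:

```python
import requests  # pip install requests

SERVER = "http://127.0.0.1:8188"

# Push a local image into ComfyUI's input folder over HTTP, so an
# API-format workflow's LoadImage node can pick it up by filename.
with open("source.png", "rb") as f:
    resp = requests.post(
        f"{SERVER}/upload/image",
        files={"image": ("source.png", f, "image/png")},
        data={"overwrite": "true"},
    )
resp.raise_for_status()
print(resp.json())  # e.g. {'name': 'source.png', 'subfolder': '', 'type': 'input'}
```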
Hi all, I am looking to build a workflow that lets me make a hyper-realistic avatar with a consistent face and a consistent body. I also need a workflow that can simultaneously inpaint and apply ControlNet to the inpainted region. My video pipeline: ComfyUI (AnimateDiff) → DaVinci Resolve → Udio. This is something I have been chasing for a while — an interesting implementation of that idea, with a lot of potential.

A text-adventure workflow: simply type "StartGame" in the input box and the game will start from your birth. You make choices from the 4 options provided by GPT, and the game continues to unfold; follow the plot and input your choices. You're free to alter the storyline as GPT accompanies you throughout, turning your imagination into stunning visuals. Grab the ComfyUI workflow JSON here — just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

Obviously I can drag the PNG into Comfy and get my workflow, but is there any way to discover the original seed values used? I ran some tests this morning. Note: this has nothing to do with my nodes — you can check ComfyUI's default workflow and see it yourself. Though the items in a batch share the same seed value, ComfyUI generates different latent noise for each item in the batch.

The question: in ComfyUI, how do you persist your random / wildcard / generated prompt for your images, so that you can understand the specifics of the true prompt that created the image? Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there. Is there a common place to download these? None of the Reddit images I find work, as they all seem to be JPG or WebP. Remember that the "include workflow" checkbox must be checked in the Save Image node, and you need an image that was created with ComfyUI to get the "Workflow: xx Nodes" indicator (for example, all images on this page have it); the shortcut does nothing either after "Copying Generation data". Any way to edit ComfyUI images without losing the workflow? Many of the workflow examples can be copied either visually or by downloading a shared file containing the workflow. This workflow can use LoRAs and ControlNets, enabling negative prompting with the KSampler, dynamic thresholding, inpainting, and more.

I would like to further modify the ComfyUI workflow for the aforementioned "Portal" scene in a way that lets me use single images in ControlNet, the same way that repo does (by…). A background remover facilitates the generation of the images/maps referred to in point 2.

Hi everyone, this is John from OpenArt. (Recap) We hosted the first ComfyUI Workflow Contest last month and got lots of high-quality workflows. We've now made many of them available to run on OpenArt Cloud Run for free, where you don't need to set up the environment or install custom nodes yourself. Discover, share, and run thousands of ComfyUI workflows on OpenArt.
AP Workflow 5.0 for ComfyUI (Hand Detailer, Face Detailer, Free Lunch, Image Chooser, XY Plot, ControlNet/Control-LoRAs, fine-tuned SDXL models, SDXL Base+Refiner, ReVision, Upscalers, Prompt Builder, Debug, etc.).

Here is the workflow for ComfyUI, updated in a folder on Google Drive, with both the JSON and the PNG of some of my workflows. I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area composition ones.

Hey community — just wanted to share that I have updated the comfy_api_simplified package: it can now be used to send images, run workflows, and receive images from a running ComfyUI server. I am personally using it as a layer between a Telegram bot and ComfyUI, to run different workflows and get results from a user's text and image input.

Impact Pack has SEGS if you want fine control (like filtering for male faces, the largest n faces, applying a ControlNet to the SEGS, …), or just a node called FaceDetailer. There's also a bunch of BBOX and SEGM detectors on Civitai (search for ADetailer); sometimes it makes sense to combine a BBOX detector (like Face) with a SEGM detector (like skin) to really…

A quick question for people with more experience with ComfyUI than me: there are many regional conditioning solutions available, but as soon as you try to add LoRA data to the conditioning channels, the LoRA data seems to overrun the whole generation. If you look at the ComfyUI examples for area composition, you can see that they're just using the nodes Conditioning (Set Mask / Set Area) → Conditioning Combine → positive input on the KSampler. Even with 4 regions and a global condition, they just combine them all, two at a… Multiple characters from separate LoRAs interacting with each other — that's the goal.

It's part of a full-scale SVD + AnimateDiff + Modelscope workflow I'm building for creating meaningful video scenes with Stable Diffusion tools, including a puppeteering engine. I'm into it.

If you're completely new to LoRA training, you're probably looking for a guide to understand what each option does. It's not the point of this post, and there's a lot to learn, but still, let me share my personal experience with you. ComfyUI breaks down the workflow into rearrangeable elements, allowing you to effortlessly create your own custom workflow.
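I can't vouch for comfy_api_simplified's exact function names, so here instead is the raw mechanism a wrapper like that sits on: listen on ComfyUI's websocket until the queued prompt finishes, then fetch the outputs through /history and /view. A sketch assuming the websocket-client package and a default local server:

```python
import json
import urllib.parse
import urllib.request
import websocket  # pip install websocket-client

SERVER = "127.0.0.1:8188"

def wait_and_fetch(prompt_id, client_id):
    """Block until `prompt_id` finishes executing, then download its images."""
    ws = websocket.WebSocket()
    ws.connect(f"ws://{SERVER}/ws?clientId={client_id}")
    while True:
        msg = ws.recv()
        if not isinstance(msg, str):
            continue  # binary frames are live previews; ignore them
        data = json.loads(msg)
        # node == None on an 'executing' event means the whole prompt is done
        if (data.get("type") == "executing"
                and data["data"]["node"] is None
                and data["data"]["prompt_id"] == prompt_id):
            break
    ws.close()

    # /history lists the outputs each node produced; /view serves the bytes.
    with urllib.request.urlopen(f"http://{SERVER}/history/{prompt_id}") as resp:
        outputs = json.loads(resp.read())[prompt_id]["outputs"]
    for node_output in outputs.values():
        for img in node_output.get("images", []):
            query = urllib.parse.urlencode(img)  # filename, subfolder, type
            with urllib.request.urlopen(f"http://{SERVER}/view?{query}") as r:
                with open(img["filename"], "wb") as out:
                    out.write(r.read())
```

The client_id must be the same one sent with the original POST to /prompt; otherwise the websocket won't see your execution events.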
Here are the models that you will need to run this workflow: the LooseControl model and the ControlNet_Checkpoint.

Updated ComfyUI workflow: SDXL (Base + Refiner) + XY Plot + Control-LoRAs + ReVision + ControlNet XL OpenPose + Upscaler. I have updated the workflow submitted last week, cleaning up the layout a bit and adding many functions I wanted to learn better. I call it "The Ultimate ComfyUI Workflow": easily switch from txt2img to img2img, with a built-in refiner and a LoRA selector. Perfect to run on a Raspberry Pi or a local server.

I'm perfecting the workflow I've named Pose Replicator. The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces, and a single OpenPose face. I got better results with the reference face in semi-profile.

AP Workflow 8.0 for ComfyUI — now with a next-gen upscaler (competitive against Magnific AI and Topaz Gigapixel!) and higher-quality mask inpainting with Fooocus.

Hey all, been using ComfyUI for a couple of months and absolutely love it. First of all, sorry if this has been covered before; I did search and nothing came back. So dragging an image made with Comfy onto the UI loads the entire workflow used to make it, which is awesome — but is there a way to make it load just the… I've been using a ComfyUI workflow, but I've run into issues that I haven't been able to resolve, even with ChatGPT's help. The problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL for fixing it.

Reddit is removing the workflow from the PNG when you upload. Apparently the dev uploaded some version with trimmed data, but generally speaking, workflows seen on GitHub can… There is the "example_workflow.png" in the file list at the top, and then you should click Download Raw File — but alas, in this case the workflow does not load. Within the folder you will find a ComyUI_Simple_Workflow.png you can drag into ComfyUI to test that the nodes are working. I'll do you one better, and send you a PNG you can directly load into Comfy. Installation: follow… If you mean workflows: they are embedded into the PNG files you generate; simply drag a PNG from your output folder onto the ComfyUI surface to restore the workflow.

Add any workflow to any arbitrary PNG with this simple tool: https://rebrand.ly/workflow2png.
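I haven't looked inside the workflow2png tool linked above, but the underlying trick is small enough to sketch: write the workflow JSON into a "workflow" text chunk of any PNG, which is the chunk ComfyUI checks on drag-and-drop. A Pillow sketch; all filenames are hypothetical:

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo  # pip install Pillow

def embed_workflow(image_path, workflow_json_path, out_path):
    """Re-save any PNG with a ComfyUI workflow embedded in its metadata."""
    with open(workflow_json_path, "r", encoding="utf-8") as f:
        workflow = f.read()
    json.loads(workflow)  # fail early if the file isn't valid JSON
    meta = PngInfo()
    meta.add_text("workflow", workflow)  # the chunk ComfyUI looks for
    Image.open(image_path).save(out_path, pnginfo=meta)

embed_workflow("any_picture.png", "my_workflow.json", "picture_with_workflow.png")
```

Note that this has to be the editor-format JSON (the plain "Save" button), not the API format — as noted earlier, the UI won't load the latter.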
A few months ago, I suggested the possibility of creating a frictionless mechanism to turn ComfyUI workflows (no matter how complex) into simple and customizable front-ends for end-users. The AP Workflow is the first step in that direction. Thanks.

You can control whether a node is run or not by using fixed seeds: ComfyUI will check (from left to right, if the workflow is linear) whether anything has changed, and run only nodes with a changed input (image/latent), seed, or other parameter. After a run, I can choose the images that I want to drag onto ComfyUI, with the corresponding seed number displayed; then I put the seed back on "fixed" and activate the upscale module. This will run the workflow once, on a single seed, and generate three images, all with the same seed. Once the final image is produced, I begin working with it in A1111 — refining, photobashing in some features I wanted, re-rendering with a second model, and so on.

In the Custom ComfyUI Workflow drop-down of the plugin window, I chose the real_time_lcm_sketching_api.json workflow. With it (or any other "built-in" workflow located in the native_workflow directory), I always get this error: …

Preparation work (not in ComfyUI):
- Take a clip and remove the background (can be done with any video editor that has a rotobrush or, as in my case, with RunwayML).
- Extract the frames from the clip (in my case, with ffmpeg).
- Copy the frames into the corresponding input folder (important: saved as 000XX.png).

In my ComfyUI workflow I set the resolutions to 1024x1024 to save time during the upscaling, which can take more than 2 minutes. I also set the sampler to dpmpp_2s_ancestral to obtain a good amount of detail, but this is also a slow sampler; depending on the picture, other samplers could work better.

I was confused by the fact that, in several YouTube videos by Sebastian Kamph and Olivio Sarikas, they simply drop PNGs into an empty ComfyUI. Just started with ComfyUI and really love the drag-and-drop workflow feature. I also use the ComfyUI Manager to take a look at the various custom nodes available and see what interests me. Instead, I created a simplified 2048x2048 workflow.
