Comfyui workflow examples reddit

If the term "workflow" has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or simply "nodes".

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searg's workflow, then copied the official ComfyUI i2v workflow into it, and I pass whatever image I like into the node. This probably isn't the fully recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format.

K12sysadmin is for K12 techs.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values.

A higher clipskip (in A1111 terms; lower, i.e. more negative, in ComfyUI's terms) equates to LESS detail in CLIP (not to be confused with detail in the image).

It's a simpler setup than u/Ferniclestix uses, but I think he likes to generate and inpaint in one session, whereas I generate several images, then import them and inpaint later (like this).

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai.

This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far for showing the difference between the Preliminary, Base, and Refiner setups. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows.

It would require many specific image-manipulation nodes to cut out the image region, pass it through the model, and paste it back. The sample prompt as a test shows a really great result.
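The clipskip remark above can be made concrete. Below is a minimal sketch of the commonly cited mapping between A1111's "Clip skip" setting and the negative layer index used by ComfyUI's CLIP Set Last Layer node; the function name is my own, and you should verify the convention against your ComfyUI build:

```python
def a1111_clip_skip_to_comfy(clip_skip: int) -> int:
    """Convert an A1111 'Clip skip' value (1 = use the last CLIP layer)
    to the negative index used by ComfyUI's CLIP Set Last Layer node
    (-1 = last layer, -2 = stop one layer earlier, i.e. less CLIP detail)."""
    if clip_skip < 1:
        raise ValueError("A1111 clip skip starts at 1")
    return -clip_skip

# A1111's default of 1 maps to ComfyUI's -1; the common anime-model
# setting of 2 maps to -2.
print(a1111_clip_skip_to_comfy(1), a1111_clip_skip_to_comfy(2))  # -1 -2
```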
I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

This workflow by Antzu is a good example of prompt scheduling, which is working well in Comfy thanks to Fitzdorf's great work.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc. would be appreciated.

You can find the Flux Dev diffusion model weights here. You can find the workflow here and the full image with metadata here. That's where I'd gotten my second workflow I posted from, which got me going. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. Put the .sft file in your ComfyUI/models/unet/ folder. I found it very helpful. Flux Schnell is a distilled 4-step model.

You can then load or drag the following image in ComfyUI to get the workflow. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. Everything else is the same. In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be ignored as well.

ComfyUI's inpainting and masking aren't perfect, but as a base to start from it'll work. This is just a simple node build off what's given and some of the newer nodes that have come out. Img2Img ComfyUI workflow.

I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. But let me know if you need help replicating some of the concepts in my process.
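The "metadata is not complete" failure above can be checked directly: ComfyUI embeds the graph in PNG text chunks, conventionally under the "workflow" and "prompt" keys. Here is a stdlib-only sketch that walks a PNG's tEXt chunks (it ignores iTXt/zTXt and CRC validation, and builds a tiny stand-in PNG only to demonstrate the round trip):

```python
import json
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Collect tEXt keyword/value pairs from a PNG byte string."""
    assert data[:8] == PNG_SIG, "not a PNG"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
        if ctype == b"IEND":
            break
    return out

def chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk with a valid CRC."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Build a minimal stand-in PNG carrying a 'workflow' tEXt chunk,
# then read the graph back out, as a drag-and-drop loader would.
wf = json.dumps({"nodes": []})
png = (PNG_SIG
       + chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + chunk(b"tEXt", b"workflow\x00" + wf.encode("latin-1"))
       + chunk(b"IEND", b""))
print(png_text_chunks(png).get("workflow"))  # {"nodes": []}
```

If this function returns nothing for one of your images, the host that served it likely stripped the metadata, which matches the symptom described above.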
Flux.1 ComfyUI install guidance, workflow, and example. 2.86s/it on a 4070 with the 25-frame model, 2.75s/it with the 14-frame model.

Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Nodes include: LoadOpenAIModel. It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server.

Ending workflow. Step 2: Download this sample image. Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The FUN begins! If the queue didn't start automatically, press Queue Prompt.

Merging 2 images together. Here is an example of 3 characters, each with its own pose, outfit, features, and expression. Left: woman wearing full armor, ginger hair, braided hair, hands on hips, serious. Middle: girl, princess dress, blonde hair, tiara, jewels, sitting on a throne, blushing.

Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use: img2img, ControlNet, upscale, and all. Hopefully this will be useful to you. To add content, your account must be vetted/verified.

ControlNet Depth ComfyUI workflow. But mine do include workflows, for the most part, in the video description. Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

EDIT: For example, this workflow shows the use of the other prompt windows. I put the workflow to the test by creating people with hands etc., and it got very good results. Just my two cents. You can't change clipskip and get anything useful from some models (SD2, for example). The example images on the top are using the "clip_g" slot on the SDXL encoder on the left, but the default workflow CLIPText on the right.
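Besides converting workflow.json into a standalone script, a running ComfyUI server can be driven directly over HTTP by POSTing an API-format graph to its /prompt endpoint. A hedged sketch follows; the host/port default (127.0.0.1:8188) is ComfyUI's usual listen address but an assumption here, and the demo graph is illustrative only:

```python
import json
import urllib.request
import uuid

def build_queue_payload(workflow: dict, client_id: str = "") -> bytes:
    """Wrap an API-format graph (the JSON from ComfyUI's
    'Save (API Format)' export) in the body POST /prompt expects."""
    return json.dumps({
        "prompt": workflow,
        "client_id": client_id or uuid.uuid4().hex,
    }).encode("utf-8")

def queue_workflow(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """Send the graph to a locally running ComfyUI server."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_queue_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes the queued prompt id

# The payload shape can be checked without a server running:
demo_graph = {"3": {"class_type": "KSampler", "inputs": {"seed": 42}}}
body = json.loads(build_queue_payload(demo_graph, client_id="demo"))
print(sorted(body), body["client_id"])  # ['client_id', 'prompt'] demo
```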
It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to.

I recently switched from A1111 to ComfyUI to mess around with AI-generated images. A few examples of my ComfyUI workflow for making very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

Breakdown of workflow content: Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. Just a base sampler and upscaler; it's nothing spectacular, but it gives good, consistent results.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. Create animations with AnimateDiff. Civitai has a few workflows as well. [If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.] Warning: it upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever you like.

If you have any of those generated images in the original PNG, you can just drop them into ComfyUI and the workflow will load. I built a free website where you can share & discover thousands of ComfyUI workflows -- https://comfyworkflows.com/.

Creating such a workflow with ComfyUI's default core nodes is not possible at the moment.

Please keep posted images SFW. In addition, I provide some sample images that can be imported into the program. It seems very hit and miss; most of what I'm getting looks like 2D camera pans. Starting workflow. Upscaling ComfyUI workflow.
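The three-character example above maps naturally onto Latent Couple / Regional Prompter-style column regions, one per character. The helper below is my own illustrative sketch (names and the equal-thirds layout are assumptions, not the API of any specific extension):

```python
def column_regions(width, height, prompts):
    """Split the canvas into equal-width columns, one per character
    prompt, in the Latent Couple / Regional Prompter style."""
    n = len(prompts)
    col = width // n
    return [{"x": i * col,
             "y": 0,
             # The last column absorbs any leftover pixels.
             "w": col if i < n - 1 else width - col * (n - 1),
             "h": height,
             "prompt": p}
            for i, p in enumerate(prompts)]

regions = column_regions(1536, 640, [
    "woman wearing full armor, ginger hair, hands on hips, serious",
    "girl, princess dress, blonde hair, tiara, sitting on a throne",
    "third character prompt here",
])
print([r["x"] for r in regions])  # [0, 512, 1024]
```

Each region's box and prompt would then feed an area-conditioning step; the geometry is the part worth getting right before wiring nodes.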
4 - The best workflow examples are through the GitHub examples pages. Comfy Workflows.

It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting (easily done from the Fast Muter nodes) and Context Switches. To make the differences somewhat easier to see, the above image is at 512x512.

K12sysadmin is open to view and closed to post.

I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture).

Jul 28, 2024 · Over the last few months I have been working on a project with the goal of allowing users to run ComfyUI workflows from devices other than a desktop, as ComfyUI isn't well suited to devices with smaller screens.

If you asked about how to put it into the PNG: you just need to create the PNG in ComfyUI, and it will automatically contain the workflow as well. I think it was 3DS Max. No LoRAs, no fancy detailing (apart from face detailing).

Hi everyone, I'm working on a project to generate furnished interiors from images of empty rooms using ComfyUI and Stable Diffusion, but I want to avoid using inpainting. Inside the workflow, you will find a box with a note containing instructions and specifications on the settings to optimize its use.

WAS suite has some workflow stuff in its GitHub links somewhere as well. Only the LCM Sampler extension is needed, as shown in this video: https://youtu.be/ppE1W0-LJas (the tutorial). Ignore the prompts and setup. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. I then just sort of pasted them together. An all-in-one workflow would be awesome. 1.5 with LCM with 4 steps.

Aug 2, 2024 · Flux Dev.
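For the 4xUltraSharp-style upscale step mentioned above, the output geometry is simple arithmetic. A sketch (the helper name is mine; the multiple-of-8 snap reflects the usual latent-size constraint, an assumption about your pipeline):

```python
def upscaled_size(w, h, model_factor=4, target_long_edge=None):
    """Size after a model upscale (e.g. a 4x model like 4xUltraSharp),
    optionally resized so the long edge hits a target, then snapped
    down to multiples of 8 as latent-space work usually expects."""
    w, h = w * model_factor, h * model_factor
    if target_long_edge:
        s = target_long_edge / max(w, h)
        w, h = round(w * s), round(h * s)
    return (w // 8) * 8, (h // 8) * 8

print(upscaled_size(1024, 1024))  # (4096, 4096), the default above
print(upscaled_size(832, 1216, target_long_edge=4096))
```

Doing this arithmetic up front avoids feeding a sampler dimensions it will silently round anyway.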
Run any ComfyUI workflow w/ ZERO setup (free & open source).

While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is. AP Workflow 9.0 for ComfyUI.

ComfyUI could have workflow screenshots, like the examples repo has, to demonstrate possible usage and also the variety of extensions. The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3 or not. Flux.1; Overview of different versions of Flux.1; Flux Hardware Requirements; How to install and use Flux.1. Or through searching Reddit; the ComfyUI manual needs updating, IMO.

The workflow posted here relies heavily on useless third-party nodes from unknown extensions. You can't change clipskip and get anything useful from SD2.0 and Pony, for example (Pony, I think, always needs 2), because of how their CLIP is encoded.

You can encode then decode back to a normal KSampler with a 0.2 denoise to fix the blur and soft details; you can just use the latent without decoding and encoding to make it much faster, but it causes problems with anything less than 1.0 denoise, due to the VAE. Maybe there is an obvious solution, but I don't know it.

This piece by Nathan Shipley didn't use this exact workflow, but it is a great example of how powerful and beautiful prompt scheduling can be. Share, discover, & run thousands of ComfyUI workflows.

Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc. I think the perfect place for them is the Wiki on GitHub.

The idea of this workflow is to sample different parts of the sigma_min, cfg_scale, and steps space with a fixed prompt and seed. (For 12 GB VRAM, the max is about 720p resolution.)
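The sigma_min/cfg_scale/steps sampling idea above is, at heart, a plain grid sweep with everything else pinned. A small sketch (the parameter values and field names are illustrative, not from the workflow being described):

```python
from itertools import product

def sweep_grid(sigma_mins, cfg_scales, steps_list, seed=42,
               prompt="a photo of a cat"):
    """Enumerate every (sigma_min, cfg_scale, steps) combination with a
    fixed prompt and seed, for an XY-plot style comparison."""
    for sigma_min, cfg, steps in product(sigma_mins, cfg_scales, steps_list):
        yield {"sigma_min": sigma_min, "cfg_scale": cfg,
               "steps": steps, "seed": seed, "prompt": prompt}

runs = list(sweep_grid([0.01, 0.1], [5.0, 7.5], [20, 30]))
print(len(runs))  # 2 * 2 * 2 = 8 combinations
print(runs[0])
```

Each emitted dict would be spliced into a queued graph (one generation per combination), so differences in the output are attributable to the swept parameters alone.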
You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. Is there a workflow with all features and options combined together that I can simply load and use?

2/ Run the step 1 workflow ONCE; all you need to change is where the original frames are and the dimensions of the output that you wish to have. If you want to post and aren't approved yet, click on a post, click "Request to Comment", and then you'll receive a vetting form.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

Workflow image with generated image. But standard A1111 inpainting works mostly the same as this ComfyUI example you provided. How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. (Same seed, etc., etc.) That being said, here's a 1024x1024 comparison also. The examples were generated with the RealisticVision 5.1 checkpoint.

My actual workflow file is a little messed up at the moment. I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. Still working on the whole thing, but I got the idea down. And then the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow.

Workflow: LoRA selector (for example, download the SDXL example LoRA from StabilityAI and put it into ComfyUI\models\loras\); VAE selector (download the default VAE from StabilityAI and put it into ComfyUI\models\vae\), just in case there's a better VAE, or a mandatory VAE for some models, in the future; restart ComfyUI.

Hey everyone, we got a lot of interest in the documentation we did of 1600+ ComfyUI nodes and wanted to share the workflow + nodes we used to do it, using GPT-4. SDXL Default ComfyUI workflow. Second pic.
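The model-placement notes above follow ComfyUI's conventional folder layout. One caution: stock ComfyUI uses models/loras (plural), not models\lora. A small sketch with a hypothetical file name (the helper and the mapping are mine; extra folders vary by install and extensions):

```python
import tempfile
from pathlib import Path

# Conventional ComfyUI model locations, relative to the install root.
MODEL_DIRS = {
    "checkpoint": "models/checkpoints",
    "lora": "models/loras",
    "vae": "models/vae",
    "unet": "models/unet",
}

def place_model(root: Path, kind: str, filename: str) -> Path:
    """Return (and create) the destination path for a downloaded model file."""
    dest = root / MODEL_DIRS[kind]
    dest.mkdir(parents=True, exist_ok=True)
    return dest / filename

# Demonstrate against a throwaway directory; 'example_lora.safetensors'
# is a placeholder name, not a real download.
root = Path(tempfile.mkdtemp())
p = place_model(root, "lora", "example_lora.safetensors")
print(p.relative_to(root).as_posix())  # models/loras/example_lora.safetensors
```

Dropping a file into the wrong folder is the most common reason a model never shows up in a loader node's dropdown.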
The video is just a screenshot of the workflow I used in ComfyUI to get the output files.

Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

This guide is about how to set up ComfyUI on your Windows computer to run Flux.1.

A1111 has great categories, like Features and Extensions, that simply show what the repo can do, what addons are out there, and all that stuff.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own.
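The "rearrangeable elements" model above is concrete in ComfyUI's API-format JSON: each node has a class_type and an inputs map, and a link is a ["<node id>", <output index>] pair pointing at another node's output. A minimal hand-written txt2img chain (the checkpoint file name is a placeholder; node ids are arbitrary strings):

```python
# Checkpoint -> positive/negative CLIP encodes -> empty latent -> KSampler.
# CheckpointLoaderSimple's outputs are (MODEL, CLIP, VAE), so ["1", 1]
# below means "node 1's CLIP output".
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "example-checkpoint.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a scenic mountain lake"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
}

def upstream_ids(graph, node_id):
    """Node ids a given node pulls inputs from (its incoming links)."""
    return sorted({v[0] for v in graph[node_id]["inputs"].values()
                   if isinstance(v, list)})

print(upstream_ids(graph, "5"))  # ['1', '2', '3', '4']
```

Rearranging a workflow is just rewiring these links, which is why the same graph can be edited by hand, by the UI, or programmatically.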