ComfyUI Engineer
About the Role
We use ComfyUI as our inference engine for AI video generation. We need someone who lives and breathes ComfyUI — someone who builds custom workflows, writes custom nodes, and knows how to squeeze every bit of performance out of video models.
You'll work on the workflows that power every video our platform generates. When a user hits 'Create Video', your workflow is what turns their prompt into a finished video. Quality, speed, and reliability all matter.
This role sits at the intersection of AI and engineering. You need to understand how diffusion models work, but more importantly, you need to make them work in production — reliably, fast, and at scale.
What You'll Do
- Design and maintain ComfyUI workflows for text-to-video and image-to-video generation
- Build custom ComfyUI nodes for our specific pipeline needs
- Optimize workflow performance — faster inference, lower VRAM usage, better quality
- Integrate new video models into ComfyUI as they're released
- Package workflows for deployment on RunPod serverless infrastructure
- Debug production issues with video quality, generation failures, and edge cases
- Experiment with model parameters, sampling methods, and scheduling to improve output quality
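To give a concrete sense of the custom node work above: a ComfyUI node is a plain Python class exposing `INPUT_TYPES`, `RETURN_TYPES`, and a function name, registered via `NODE_CLASS_MAPPINGS`. The node below is a hypothetical minimal example (a prompt-prefixing utility, not part of our actual pipeline):

```python
# Minimal sketch of a custom ComfyUI node. In a real install this
# would live in a module under ComfyUI/custom_nodes/; the class and
# its behavior here are illustrative, not our production code.
class PromptPrefixNode:
    """Prepends a fixed style prefix to a text prompt."""

    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI reads this to build the node's input sockets/widgets.
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True}),
                "prefix": ("STRING", {"default": "cinematic, 4k"}),
            }
        }

    RETURN_TYPES = ("STRING",)   # one STRING output socket
    FUNCTION = "run"             # method ComfyUI invokes
    CATEGORY = "text"            # where the node appears in the menu

    def run(self, prompt, prefix):
        # Node outputs are always returned as a tuple.
        return (f"{prefix}, {prompt}",)


# ComfyUI discovers custom nodes through this mapping at startup.
NODE_CLASS_MAPPINGS = {"PromptPrefixNode": PromptPrefixNode}
```

Real pipeline nodes follow the same skeleton but typically operate on latents, conditioning, or image tensors rather than strings.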
What We're Looking For
- Deep experience with ComfyUI — you've built complex workflows, not just used the defaults
- Understanding of diffusion models (scheduling, sampling, guidance, LoRA, ControlNet)
- Python skills for custom node development
- Experience with video generation models (HunyuanVideo, Wan2.1, AnimateDiff, or similar)
- Comfortable with GPU environments, CUDA, and VRAM optimization
- Ability to work independently and figure things out from model repos and papers
Nice to Have
- Published custom ComfyUI nodes or workflows
- Experience with RunPod deployment and serverless handlers
- Familiarity with Docker and containerized inference pipelines
- Experience with audio generation models (for video soundtracks)
- Background in video post-processing or VFX
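For candidates unfamiliar with the RunPod deployment mentioned above: a RunPod serverless worker is a handler function that receives a job dict and returns a result. The sketch below shows the shape of such a handler under assumed input fields (`prompt` is a hypothetical parameter name); the actual workflow submission logic is elided:

```python
# Sketch of a RunPod serverless handler wrapping a ComfyUI workflow.
# Hypothetical example: in production this would submit the workflow
# to a running ComfyUI instance and return the rendered video.
def handler(job):
    """RunPod invokes this once per job; job["input"] holds the request."""
    params = job.get("input", {})
    prompt = params.get("prompt")
    if not prompt:
        # Returning an error dict surfaces the failure to the caller.
        return {"error": "missing required field: prompt"}
    # ... queue the ComfyUI workflow here, poll until the video is done ...
    return {"status": "queued", "prompt": prompt}


# In a deployed worker, the file would end by registering the handler:
#   import runpod
#   runpod.serverless.start({"handler": handler})
```

The `runpod.serverless.start` call is what turns the function into a long-lived worker; everything else is ordinary Python.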