Licensing
What license is Flux Dev released under?
Flux Dev was initially released under the FLUX.1 [dev] Non-Commercial License; the license has since been updated, so check the official repository for the current terms. Key points:
- Research and personal use: Permitted
- Commercial use: Check current license terms (license has been updated since initial release)
- Derivative works: Permitted with attribution
- Model redistribution: Permitted with license inclusion
What license is Flux Schnell released under?
Flux Schnell uses the Apache 2.0 license — the most permissive option:
- Commercial use: Fully permitted
- Modification: Permitted
- Distribution: Permitted
- Private use: Permitted
- Patent use: Permitted
For commercial products, Flux Schnell provides the clearest licensing path.
What about Flux Pro licensing?
Flux Pro is available only via API. Commercial usage rights are included in the API pricing — you pay per generation and can use all generated images commercially. No separate license purchase is required.
Can I sell images generated by Flux?
- Flux Pro (API): Yes. Commercial usage is included in API pricing.
- Flux Schnell: Yes. Apache 2.0 places no restrictions on output usage.
- Flux Dev: Check the current license terms for commercial-use provisions.
Do generated images have copyright?
The copyright status of AI-generated images varies by jurisdiction. In most cases:
- You have usage rights to images you generate
- Black Forest Labs does not claim ownership of generated outputs
- Copyright registration may or may not be available depending on your jurisdiction
- Consult legal counsel for specific commercial applications
LoRA Fine-Tuning
Can I fine-tune Flux models?
Yes. Both Flux Dev and Flux Schnell can be fine-tuned using LoRA (Low-Rank Adaptation). Flux Pro cannot be fine-tuned (API-only access).
What do I need for LoRA training?
Hardware:
- Minimum: GPU with 16GB VRAM (e.g., RTX 4060 Ti 16GB)
- Recommended: GPU with 24GB VRAM (e.g., RTX 3090, RTX 4090, A5000)
- For faster training: Multi-GPU setup
Software:
- Kohya_ss (most popular Flux LoRA training tool)
- SimpleTuner (alternative training framework)
- ai-toolkit by ostris (lightweight option)
Training Data:
- 15-50 high-quality images for the target concept/style
- Consistent quality and style across training images
- Accompanying text captions describing each image
- Higher resolution is better (1024×1024 minimum)
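Training tools such as Kohya_ss commonly expect each image to be paired with a same-named `.txt` caption file. A small pre-flight check (hypothetical helper, assuming that pairing convention) can catch missing captions before a long training run:

```python
from pathlib import Path

def find_missing_captions(dataset_dir):
    """Return image filenames that lack a matching .txt caption file."""
    image_exts = {".png", ".jpg", ".jpeg", ".webp"}
    missing = []
    for image in sorted(Path(dataset_dir).iterdir()):
        if image.suffix.lower() in image_exts:
            # Caption convention: photo.png is captioned by photo.txt
            if not image.with_suffix(".txt").exists():
                missing.append(image.name)
    return missing
```

An empty result means every training image has a caption; anything returned should be captioned (or removed) before starting.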
How long does LoRA training take?
| GPU | Training Steps | Approximate Time |
|---|---|---|
| RTX 3090 24GB | 1,000 | ~2 hours |
| RTX 4090 24GB | 1,000 | ~1.5 hours |
| A100 80GB | 1,000 | ~45 minutes |
Most LoRAs converge between 500 and 2,000 training steps. Start with 1,000 steps and evaluate.
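To relate step counts to your dataset size: each step processes `batch_size` images, so the number of full passes (epochs) over the data is steps × batch size ÷ image count. A small helper (illustrative names, not from any training tool):

```python
def epochs_for(steps, batch_size, num_images):
    """Number of full passes over the dataset a training run makes."""
    return steps * batch_size / num_images

# 1,000 steps at batch size 1 over a 20-image dataset:
# epochs_for(1000, 1, 20) -> 50.0 passes
```

Very high epoch counts on tiny datasets tend to overfit, which is one reason to evaluate checkpoints along the way rather than only at the end.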
What training parameters should I use?
Recommended starting parameters:
- Learning rate: 1e-4 for standard LoRA
- LoRA rank: 16-32 (higher = more capacity, more VRAM)
- Batch size: 1-2 (limited by VRAM)
- Resolution: 1024×1024
- Optimizer: AdamW8bit
- Scheduler: Cosine with warmup
- Training steps: 1,000-2,000
These are starting points. Optimal parameters vary by training data and target concept. Experimentation is expected.
How do I use a LoRA with Flux?
In ComfyUI:
- Place the LoRA file in the `models/loras/` directory
- Add a LoRA Loader node connected to the Flux model loader
- Set the LoRA weight (0.5-1.0 is typical; start with 0.7)
- Generate as normal — the LoRA influences the output style/content
In Diffusers (Python):
```python
from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/your/lora.safetensors")
pipe.to("cuda")

image = pipe("your prompt", num_inference_steps=50).images[0]
```
Can I share or sell my LoRA?
Yes. LoRA files are derivative works and can be distributed under the terms of the base model’s license. Common distribution platforms:
- CivitAI (largest community, easy sharing)
- Hugging Face (developer-oriented)
- Personal websites
API Rate Limits
What are the rate limits for Flux Pro API?
Rate limits vary by provider:
| Provider | Free Tier | Paid Tier | Enterprise |
|---|---|---|---|
| BFL Direct | 1 req/sec | 10 req/sec | Custom |
| Replicate | 10 req/min (free) | Based on plan | Custom |
| Fal.ai | 5 req/min (free) | 60 req/min | Custom |
| Together AI | Based on credits | Based on plan | Custom |
What happens when I hit rate limits?
Most providers return an HTTP 429 (Too Many Requests) with a Retry-After header. Best practices:
- Implement exponential backoff
- Use a request queue to manage concurrency
- Batch non-urgent requests during off-peak times
- Contact the provider for increased limits if needed
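The first two practices can be sketched in pure Python, independent of any particular HTTP client. This hypothetical helper honors a `Retry-After` hint when the response carries one and otherwise backs off exponentially with jitter:

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn(); on a 429 response, wait and retry.

    request_fn is assumed to return an object with a .status_code and,
    when rate-limited, an optional .retry_after delay in seconds.
    """
    for attempt in range(max_retries):
        response = request_fn()
        if response.status_code != 429:
            return response
        # Prefer the server's Retry-After hint; otherwise back off
        # exponentially with jitter to avoid synchronized retries.
        delay = getattr(response, "retry_after", None)
        if delay is None:
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
        time.sleep(delay)
    raise RuntimeError("rate limit persisted after retries")
```

For real clients, adapt the attribute access to however your HTTP library exposes status codes and the `Retry-After` header.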
Is there a maximum image resolution via API?
| Provider | Maximum Resolution | Maximum Megapixels |
|---|---|---|
| BFL Direct | 2048×2048 | ~4MP |
| Replicate | 2048×2048 | ~4MP |
| Fal.ai | 2048×2048 | ~4MP |
For higher resolutions, generate at maximum supported resolution and upscale using dedicated upscaling models.
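When a target size exceeds the cap, one approach is to scale the request down to fit the megapixel budget while keeping the aspect ratio, then upscale afterwards. A sketch (assuming a 4 MP cap; the multiple-of-16 rounding is a common diffusion-pipeline requirement, not a documented provider rule):

```python
import math

def fit_to_megapixel_cap(width, height, max_megapixels=4.0):
    """Scale (width, height) down to fit a megapixel budget, keeping aspect."""
    pixels = width * height
    cap = max_megapixels * 1_000_000
    if pixels <= cap:
        return width, height
    scale = math.sqrt(cap / pixels)
    # Round down to multiples of 16, which diffusion pipelines commonly require.
    return (int(width * scale) // 16 * 16, int(height * scale) // 16 * 16)
```

Generate at the returned size, then hand the result to a dedicated upscaling model to reach the original target.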
Hardware Requirements
What’s the minimum GPU for each Flux model?
| Model | Minimum VRAM | Recommended VRAM | Notes |
|---|---|---|---|
| Flux Pro | N/A (API only) | N/A | Cloud-based |
| Flux Dev | 12GB (with FP8) | 24GB | FP8 quantization required for 12GB |
| Flux Schnell | 8GB (with FP8) | 16GB | Fastest on 24GB+ |
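Model weights dominate these numbers: VRAM for weights is roughly parameter count × bytes per parameter. A back-of-envelope estimator (the ~12B figure often cited for the Flux transformer is an approximation, and activations, text encoders, and the VAE add overhead on top):

```python
def weight_vram_gb(params_billion, bytes_per_param):
    """Approximate VRAM needed for model weights alone, in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A ~12B-parameter transformer:
# FP16 (2 bytes/param): ~24 GB, hence the 24GB recommendation
# FP8  (1 byte/param):  ~12 GB, hence FP8 on 12GB cards
```

This is why FP8 quantization roughly halves the VRAM floor in the table above.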
Can I run Flux on CPU only?
Technically possible for Flux Schnell (1-4 step generation) but impractical — generation takes minutes per image on CPU versus seconds on a GPU. Not recommended for any production or regular use.
VRAM optimization techniques
If your GPU has limited VRAM:
- FP8 quantization: Reduces model precision, halving VRAM requirements with minimal quality loss
- CPU offloading: Moves inactive model components to RAM, reducing peak VRAM usage
- Attention slicing: Processes attention in chunks rather than all at once
- Tiled VAE: Decodes the image in tiles rather than all at once
ComfyUI supports all these optimizations through node configurations.
Output Specifications
What output formats does Flux support?
The model generates raw pixel data. Output format depends on your pipeline:
- PNG: Lossless, largest file size (~2-5MB per 1024×1024)
- JPEG: Lossy compression, smallest file size (~200-500KB)
- WebP: Good balance of quality and size (~300-800KB)
- BMP: Uncompressed, very large
- TIFF: Lossless, suitable for professional workflows
What’s the recommended number of inference steps?
| Model | Minimum | Recommended | Maximum (diminishing returns) |
|---|---|---|---|
| Flux Pro | N/A (API handles) | N/A | N/A |
| Flux Dev | 20 | 30-50 | 50 |
| Flux Schnell | 1 | 4 | 4 |
What guidance scale should I use?
Flux uses a different guidance mechanism than Stable Diffusion. Recommended values:
- Flux Dev: 3.0-4.0 (default 3.5)
- Flux Schnell: 0 (distilled model, guidance not applicable)
Higher guidance produces more prompt-adherent but potentially less natural images. Lower guidance produces more varied but potentially less accurate results.
Troubleshooting
Generated images look blurry or low-quality
- Increase inference steps (try 40-50 for Flux Dev)
- Check resolution — ensure you’re generating at 1024×1024 or higher
- Verify FP8 quantization isn’t over-compressing (try FP16 if VRAM allows)
- Update to the latest model weights
Text in images is garbled or incorrect
- Keep text short (1-5 words is most reliable)
- Put text in quotes within the prompt: a sign reading "OPEN"
- Use Flux Pro for best text rendering
- Generate multiple candidates and select the best text rendering
- Consider adding text in post-processing for critical applications
Generation is very slow
- Verify the GPU is being used (check `nvidia-smi` during generation)
- Reduce inference steps
- Lower resolution
- Close other GPU-consuming applications
- Use Flux Schnell for speed-critical applications
- Ensure you have the latest CUDA drivers
Out of memory errors
- Enable FP8 quantization
- Enable CPU offloading
- Enable attention slicing
- Reduce resolution
- Use Flux Schnell (lower VRAM requirements)
- Close other applications using GPU memory
References
- Black Forest Labs: blackforestlabs.ai
- Flux GitHub Repository: github.com/black-forest-labs/flux
- Hugging Face Flux Models: huggingface.co/black-forest-labs
- ComfyUI: github.com/comfyanonymous/ComfyUI
- Kohya_ss LoRA Training: github.com/kohya-ss/sd-scripts
- Apache License 2.0: apache.org/licenses/LICENSE-2.0
- CivitAI Flux Models: civitai.com