# Engine Setup Guide
OpenSNS uses a pluggable engine system. Each engine type (LLM, Image, Video, and UGC) can be swapped independently. You can set defaults globally with environment variables, then let individual users override them in the Settings UI when they add their own API keys.
## Overview

Engine configuration is split into two layers:
- Global defaults from environment variables
- Per-user overrides from Settings > API Keys
User-provided keys always take priority over global defaults. That makes it easy to run one shared deployment, while still letting power users bring their own provider accounts.
## LLM Engines

LLM engines power ad copy, strategy, research, and other text generation tasks.
### OpenAI

Recommended for production because it is the default, broadly supported, and easy to set up.

Required:

```shell
OPENAI_API_KEY=sk-your-key
```

Optional:

```shell
OPENAI_MODEL=gpt-4o
```

Set the default engine:

```shell
DEFAULT_LLM_ENGINE=openai
```

Notes:
- OpenSNS uses `OPENAI_BASE_URL` for OpenAI-compatible endpoints; it defaults to `https://api.openai.com/v1`
- If you want the standard OpenAI API, you only need `OPENAI_API_KEY`
### Anthropic

Use Anthropic when you want Claude models for text generation.

Required:

```shell
ANTHROPIC_API_KEY=your-anthropic-key
```

Set the default engine:

```shell
DEFAULT_LLM_ENGINE=anthropic
```

Notes:
- Choose this when you want Claude-based copywriting or research
- Make sure the Anthropic key is configured globally or per user in Settings > API Keys
### Gemini

Use Google Gemini when you want Google’s LLMs for text generation.

Required:

```shell
GOOGLE_API_KEY=your-google-api-key
```

Set the default engine:

```shell
DEFAULT_LLM_ENGINE=gemini
```

Notes:
- This uses the `GOOGLE_API_KEY` setting from backend config
- Make sure your Google key is available before selecting Gemini as the default
### Groq

Use Groq when you want fast inference with supported open-source models.

Required:

```shell
GROQ_API_KEY=your-groq-key
```

Set the default engine:

```shell
DEFAULT_LLM_ENGINE=groq
```

Notes:
- This is a good option when speed matters more than model variety
- Make sure your Groq key is configured before switching the default engine
### Ollama

Good for self-hosted and local development setups.

Required:

- A running Ollama instance

Configure the endpoint and default engine:

```shell
OLLAMA_URL=http://localhost:11434
DEFAULT_LLM_ENGINE=ollama
```

Notes:
- Ollama is useful when you want to avoid external API usage
- Make sure the Ollama server is reachable from the OpenSNS backend
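Before switching the default engine, you can check that the backend can reach the server. This sketch uses only the standard library and Ollama's model-listing endpoint, `/api/tags`; the helper names are illustrative.

```python
import urllib.error
import urllib.request

def ollama_tags_url(base_url: str) -> str:
    """Build the URL for Ollama's model-listing endpoint from OLLAMA_URL."""
    return base_url.rstrip("/") + "/api/tags"

def ollama_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if an Ollama server answers at /api/tags within the timeout."""
    try:
        with urllib.request.urlopen(ollama_tags_url(base_url), timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(ollama_tags_url("http://localhost:11434/"))  # http://localhost:11434/api/tags
```

Run a check like this from the same host (or container) as the OpenSNS backend, since that is where the connection will actually originate.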
### Custom API

Use this option for any OpenAI-compatible provider, including Fireworks AI, Together AI, Groq-compatible gateways, and similar services.

Configure the shared OpenAI-style settings:

```shell
OPENAI_API_KEY=your-provider-key
OPENAI_BASE_URL=https://your-provider.example/v1
OPENAI_MODEL=your-model-name
DEFAULT_LLM_ENGINE=openai
```

Notes:
- Keep `DEFAULT_LLM_ENGINE` set to `openai` when the provider follows the OpenAI API format
- Change only the base URL, key, and model name
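Because these providers all speak the OpenAI request format, only the three values change. A sketch of how a backend might assemble the shared settings (the function name and fallback values are illustrative, not the actual OpenSNS code):

```python
import os

def openai_style_config() -> dict:
    """Collect OpenAI-compatible settings, falling back to the official API."""
    return {
        "api_key": os.environ["OPENAI_API_KEY"],  # required, no fallback
        "base_url": os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "model": os.environ.get("OPENAI_MODEL", "gpt-4o"),
    }

# Point the same code at a hypothetical compatible provider by swapping env vars:
os.environ["OPENAI_API_KEY"] = "your-provider-key"
os.environ["OPENAI_BASE_URL"] = "https://your-provider.example/v1"
os.environ["OPENAI_MODEL"] = "your-model-name"
print(openai_style_config()["base_url"])  # https://your-provider.example/v1
```

This is why the engine stays `openai`: the provider swap happens entirely in configuration, not in code.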
## Image Engines

Image engines generate ad creative images.
### Fal.ai

Recommended cloud option for production.

Required:

```shell
FAL_KEY=your-fal-key
```

Set the default engine:

```shell
DEFAULT_IMAGE_ENGINE=fal
```

Notes:
- Fal.ai supports image models such as Flux Dev and Flux Pro
- This is the easiest option if you want managed infrastructure and strong model quality
### Flux Pro

Use Flux Pro when you want another Fal.ai-backed image option with a stronger model tier.

Required:

```shell
FAL_KEY=your-fal-key
```

Set the default engine:

```shell
DEFAULT_IMAGE_ENGINE=flux-pro
```

Notes:
- This uses the same Fal.ai key as the standard image engine
- Choose Flux Pro if you want to switch models without changing providers
### ComfyUI

Best for self-hosted deployments where you want full control over models and workflows.

Required:

- A running ComfyUI instance

Configure the endpoint and default engine:

```shell
COMFYUI_URL=http://localhost:8188
DEFAULT_IMAGE_ENGINE=comfyui
```

Notes:
- Install the appropriate Stable Diffusion or Flux models in your ComfyUI environment
- Make sure the backend can reach the ComfyUI server at the configured URL
## Video Engines

Video engines are used for image-to-video generation.
### Fal.ai Video

Recommended cloud option for production.

Required:

```shell
FAL_KEY=your-fal-key
```

Set the default engine:

```shell
DEFAULT_VIDEO_ENGINE=fal-video
```

Notes:
- This uses the same `FAL_KEY` as Fal.ai image generation
- It is the simplest choice if you already use Fal.ai for images
### Runway

Cloud video generation option for teams that already have Runway access.

Required:

- `FAL_KEY` for the Fal.ai-backed Runway adapter

Set the default engine:

```shell
DEFAULT_VIDEO_ENGINE=runway
```

Notes:
- Use this if you want a hosted video provider separate from your image stack
- This adapter is configured with the same `FAL_KEY` used for Fal.ai
### ComfyUI Video

Self-hosted option for image-to-video workflows, including AnimateDiff-based setups.

Required:

- A running ComfyUI instance with video-capable workflows and models

Configure the endpoint and default engine:

```shell
COMFYUI_URL=http://localhost:8188
DEFAULT_VIDEO_ENGINE=comfyui-video
```

Notes:
- This works best when your ComfyUI server already has the right motion models installed
- Use it if you want to keep generation fully local
## UGC Engines

UGC engines generate AI avatar talking-head videos.
### HeyGen

Recommended cloud option for production.

Required:

```shell
HEYGEN_API_KEY=your-heygen-key
```

Set the default engine:

```shell
DEFAULT_UGC_ENGINE=heygen
```

Notes:
- HeyGen supports custom avatars and multiple voices
- This is the strongest default choice if you want polished avatar output without self-hosting
- Pick this engine in Settings > API Keys for the user account that should use it
### D-ID

Cloud UGC option for teams already using D-ID.

Required:

```shell
DID_API_KEY=your-d-id-key
```

Set the default engine:

```shell
DEFAULT_UGC_ENGINE=d-id
```

Notes:
- Use this when you want a hosted avatar provider with a different pricing or workflow fit
- Make sure your API key is available to the backend
- Pick this engine in Settings > API Keys for the user account that should use it
### SadTalker

Self-hosted, free option for avatar talking-head generation.

Required:

- A running SadTalker instance

Configure the endpoint:

```shell
SADTALKER_URL=http://localhost:5000
```

Notes:
- Good for local or private deployments
- You are responsible for hosting and keeping the SadTalker service available
- In the current codebase, UGC engine selection is handled per user in Settings > API Keys rather than by a global default env var
## Object Storage

Object storage is required for production. There is no fallback for generated assets, so configure storage before running campaigns.

OpenSNS supports S3-compatible storage, including AWS S3, Cloudflare R2, and MinIO.

Required variables:

```shell
STORAGE_ENDPOINT_URL=https://your-storage-endpoint
STORAGE_ACCESS_KEY_ID=your-access-key
STORAGE_SECRET_ACCESS_KEY=your-secret-key
STORAGE_BUCKET_NAME=opensns-assets
STORAGE_PUBLIC_URL=https://your-public-bucket-url
```

Optional:

```shell
STORAGE_REGION=auto
```

### Cloudflare R2 example
```shell
STORAGE_ENDPOINT_URL=https://<accountid>.r2.cloudflarestorage.com
STORAGE_ACCESS_KEY_ID=<r2-access-key>
STORAGE_SECRET_ACCESS_KEY=<r2-secret-key>
STORAGE_BUCKET_NAME=opensns-assets
STORAGE_PUBLIC_URL=https://pub-<bucket-id>.r2.dev
STORAGE_REGION=auto
```

### AWS S3 example
```shell
STORAGE_ENDPOINT_URL=https://s3.amazonaws.com
STORAGE_ACCESS_KEY_ID=<aws-access-key-id>
STORAGE_SECRET_ACCESS_KEY=<aws-secret-access-key>
STORAGE_BUCKET_NAME=opensns-assets
STORAGE_PUBLIC_URL=https://<bucket-name>.s3.amazonaws.com
STORAGE_REGION=us-east-1
```

### MinIO example
```shell
STORAGE_ENDPOINT_URL=http://localhost:9000
STORAGE_ACCESS_KEY_ID=minioadmin
STORAGE_SECRET_ACCESS_KEY=minioadmin
STORAGE_BUCKET_NAME=opensns-assets
STORAGE_PUBLIC_URL=http://localhost:9000/opensns-assets
STORAGE_REGION=us-east-1
```
## Per-User Configuration

Users can override the global defaults in Settings > API Keys.
When a user adds their own key, OpenSNS uses that provider for their account instead of the shared global key. This is useful for BYOK deployments, agency workspaces, and mixed-provider setups.
Typical setup:
- Global defaults provide the main production engines
- Individual users can swap to their own LLM, image, video, or UGC provider
- User-provided keys take priority over environment defaults
## Recommended Production Setup

| Engine Type | Recommended Provider | Why |
|---|---|---|
| LLM | OpenAI, Anthropic, Gemini, or Groq | Choose the provider that best matches your model quality, latency, and cost needs |
| Image | Fal.ai | Simple cloud setup and strong image model support |
| Video | Fal.ai Video | Same key as image generation and a clean hosted workflow |
| UGC | HeyGen | Strong avatar and voice support for polished talking-head videos |
| Storage | Cloudflare R2 or AWS S3 | Durable, production-ready asset storage with S3 compatibility |
A practical production `.env` might look like this:

```shell
OPENAI_API_KEY=...
OPENAI_MODEL=gpt-4o
DEFAULT_LLM_ENGINE=openai

FAL_KEY=...
DEFAULT_IMAGE_ENGINE=fal
DEFAULT_VIDEO_ENGINE=fal-video

HEYGEN_API_KEY=...

STORAGE_ENDPOINT_URL=...
STORAGE_ACCESS_KEY_ID=...
STORAGE_SECRET_ACCESS_KEY=...
STORAGE_BUCKET_NAME=opensns-assets
STORAGE_PUBLIC_URL=...
```

For most teams, that combination gives the best balance of quality, setup speed, and operational simplicity.