# Engine System
OpenSNS uses a registry-based engine system for flexible AI backend configuration.
## Engine Types

| Type | Purpose | Implementations |
|---|---|---|
| LLM | Text generation | OpenAI, Ollama, Fallback |
| Image | Image generation | Fal.ai, ComfyUI |
| Video | Video generation | Fal.ai, Runway, ComfyUI |
## Engine Registry

A central registry holds all AI engines:

```python
from app.core.registry import engine_registry

# Register engines at startup
engine_registry.register_llm_engine("openai", lambda: OpenAIAdapter())
engine_registry.register_llm_engine("ollama", lambda: OllamaAdapter())
engine_registry.register_llm_engine("fallback", lambda: FallbackLLMAdapter())

# Get an engine by name
llm = engine_registry.get_llm_engine("openai")
```
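Internally, a registry like this only needs to map names to factory callables. The following is a minimal sketch of the idea — the method names mirror the API above, but the class body is an illustration, not the actual OpenSNS source:

```python
from typing import Callable, Dict


class EngineRegistry:
    """Maps engine names to factories; engines are built lazily on first use."""

    def __init__(self) -> None:
        self._llm_factories: Dict[str, Callable[[], object]] = {}
        self._llm_instances: Dict[str, object] = {}

    def register_llm_engine(self, name: str, factory: Callable[[], object]) -> None:
        # Storing a factory (not an instance) keeps startup cheap:
        # adapters that need API clients are only built when requested.
        self._llm_factories[name] = factory

    def get_llm_engine(self, name: str) -> object:
        if name not in self._llm_instances:
            self._llm_instances[name] = self._llm_factories[name]()
        return self._llm_instances[name]
```

Registering factories rather than instances is what makes it safe to register every backend at startup, even ones whose API keys are missing.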
## LLM Adapters

### Interface

```python
class BaseLLMAdapter(ABC):
    @abstractmethod
    async def generate(
        self,
        prompt: str,
        system_prompt: Optional[str] = None,
        temperature: float = 0.7,
    ) -> str:
        pass
```
### OpenAI Adapter

Uses GPT-4 for high-quality generation:

```python
class OpenAIAdapter(BaseLLMAdapter):
    async def generate(self, prompt, system_prompt=None, temperature=0.7):
        client = AsyncOpenAI(api_key=self.api_key)
        messages = []
        if system_prompt:  # omit the system message when none is given
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": prompt})
        response = await client.chat.completions.create(
            model="gpt-4",
            messages=messages,
            temperature=temperature,
        )
        return response.choices[0].message.content
```
### Ollama Adapter

For local/free LLM usage:

```python
class OllamaAdapter(BaseLLMAdapter):
    async def generate(self, prompt, system_prompt=None, temperature=0.7):
        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{self.ollama_url}/api/generate",
                json={
                    "model": "llama2",
                    "prompt": prompt,
                    "system": system_prompt or "",
                    "options": {"temperature": temperature},
                    "stream": False,  # return one JSON object, not NDJSON chunks
                },
                timeout=120.0,
            )
            response.raise_for_status()
            return response.json()["response"]
```
### Fallback Adapter

Returns placeholder content when no API keys are configured:

```python
class FallbackLLMAdapter(BaseLLMAdapter):
    async def generate(self, prompt, **kwargs):
        return "[Fallback response - configure API keys for real generation]"
```
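Because every adapter implements the same `generate` signature, calling code never has to know which backend is active; the fallback keeps the app functional with no keys at all. A sketch of that caller-side view, using a standalone stub of the fallback adapter and a hypothetical `caption_post` helper:

```python
import asyncio


class FallbackLLMAdapter:
    """Standalone stub of the fallback adapter shown above."""

    async def generate(self, prompt, **kwargs):
        return "[Fallback response - configure API keys for real generation]"


async def caption_post(llm, post_text: str) -> str:
    # Backend-agnostic: any adapter exposing .generate() works here
    return await llm.generate(f"Write a caption for: {post_text}")


result = asyncio.run(caption_post(FallbackLLMAdapter(), "sunset photo"))
print(result)
```

Swapping `FallbackLLMAdapter()` for `OpenAIAdapter()` or `OllamaAdapter()` changes nothing in the caller.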
## Image Adapters

### Interface

```python
class BaseImageAdapter(ABC):
    @abstractmethod
    async def generate_image(
        self,
        prompt: str,
        width: int = 1024,
        height: int = 1024,
    ) -> GenerationResult:
        pass
```
### Fal.ai Adapter

Fast cloud-based image generation:

```python
class FalImageAdapter(BaseImageAdapter):
    async def generate_image(self, prompt, width=1024, height=1024):
        result = await fal.run(
            "fal-ai/flux/dev",
            arguments={
                "prompt": prompt,
                "image_size": {"width": width, "height": height}
            }
        )
        return GenerationResult(url=result["images"][0]["url"])
```
## Video Adapters

### Interface

```python
class BaseVideoAdapter(ABC):
    @abstractmethod
    async def image_to_video(
        self,
        image_url: str,
        motion_prompt: str,
        duration: float = 5.0,
    ) -> VideoGenerationResult:
        pass
```
### Fal.ai Video

Converts static images to video:

```python
class FalVideoAdapter(BaseVideoAdapter):
    async def image_to_video(self, image_url, motion_prompt, duration=5.0):
        result = await fal.run(
            "fal-ai/fast-svd-lcm",
            arguments={
                "image_url": image_url,
                # SVD is image-driven: motion strength comes from
                # motion_bucket_id rather than a text prompt
                "motion_bucket_id": 127
            }
        )
        return VideoGenerationResult(video_url=result["video"]["url"])
```
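Chaining the two adapter interfaces gives the typical post-generation pipeline: prompt → image → video. A sketch with stand-in adapters (the `Stub*` classes, `generate_media` helper, and example URLs are illustrative, not part of OpenSNS):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class GenerationResult:
    url: str


@dataclass
class VideoGenerationResult:
    video_url: str


class StubImageAdapter:
    async def generate_image(self, prompt, width=1024, height=1024):
        return GenerationResult(url=f"https://example.com/{prompt}.png")


class StubVideoAdapter:
    async def image_to_video(self, image_url, motion_prompt, duration=5.0):
        return VideoGenerationResult(video_url=image_url.replace(".png", ".mp4"))


async def generate_media(image_engine, video_engine, prompt: str):
    # Any BaseImageAdapter/BaseVideoAdapter pair can be plugged in here
    image = await image_engine.generate_image(prompt)
    video = await video_engine.image_to_video(image.url, motion_prompt="slow pan")
    return image, video


image, video = asyncio.run(
    generate_media(StubImageAdapter(), StubVideoAdapter(), "cat")
)
```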
## Adding Custom Engines

1. Create an adapter class:

```python
class MyCustomAdapter(BaseLLMAdapter):
    async def generate(self, prompt, **kwargs):
        # Your implementation
        pass
```

2. Register it in `initializers.py`:

```python
engine_registry.register_llm_engine(
    "my-custom",
    lambda: MyCustomAdapter(),
)
```

3. Configure it as the default:

```
DEFAULT_LLM_ENGINE=my-custom
```
## Engine Selection Priority

Engines are resolved in this order:

1. User settings (per-user configuration)
2. Environment variable default
3. Fallback engine
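The priority chain above can be sketched as a simple resolver. The function name, the `user_settings` dict shape, and the `llm_engine` key are assumptions for illustration; only the `DEFAULT_LLM_ENGINE` variable and the `"fallback"` engine name come from this document:

```python
import os


def resolve_llm_engine_name(user_settings: dict) -> str:
    """Pick an engine name: user choice, then env default, then fallback."""
    # 1. Per-user configuration wins
    if user_settings.get("llm_engine"):
        return user_settings["llm_engine"]
    # 2. Deployment-wide default from the environment
    env_default = os.environ.get("DEFAULT_LLM_ENGINE")
    if env_default:
        return env_default
    # 3. The fallback engine is always registered, so this name always resolves
    return "fallback"
```

The resolved name is then passed to `engine_registry.get_llm_engine(...)`, so a misconfigured deployment degrades to placeholder output instead of failing.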