Horizons: The OmniChat

A flexible and powerful chatbot platform that brings enterprise-grade LLM capabilities to your infrastructure.

Architecture Components

Understanding the components that make up Horizons OmniChat is crucial for successful deployment and operation. Let’s explore how each piece works together to create a powerful, flexible chatbot platform.

Core Components: The Building Blocks of Horizons

Horizons consists of three primary components, each carefully designed to handle specific aspects of the platform’s functionality.

Open WebUI: Gateway to AI Interaction

Open WebUI serves as more than just an interface - it’s the command center of your Horizons deployment. Built with modern technologies, it provides a seamless experience for both users and administrators.

At its foundation, Open WebUI combines:

This powerful combination enables:

Ollama: Local Intelligence Engine

Ollama represents our commitment to providing powerful AI capabilities directly within your infrastructure.

Key capabilities include:
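
To make the local engine concrete, the sketch below sends one prompt to Ollama's REST API. It assumes the default endpoint on localhost:11434 and that a model such as llama3 has already been pulled; adjust both for your own deployment.

    import requests

    # Minimal sketch: one prompt against a local Ollama instance.
    # Assumes the default port (11434) and an already-pulled "llama3" model.
    OLLAMA_URL = "http://localhost:11434"

    def generate(prompt: str, model: str = "llama3") -> str:
        resp = requests.post(
            f"{OLLAMA_URL}/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(generate("Summarize Horizons OmniChat in one sentence."))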

Bedrock Gateway: Bridge to Cloud AI

The Bedrock Gateway exemplifies our approach to hybrid capabilities, providing seamless access to AWS’s powerful AI models while maintaining security and control. This component acts as an intelligent intermediary, handling:
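
As an illustration of that intermediary role, the sketch below calls a gateway that exposes an OpenAI-compatible chat completions endpoint. The URL, port, and API-key scheme are assumptions made for this example rather than a description of the shipped interface.

    import os
    import requests

    # Hypothetical client for a Bedrock Gateway that exposes an
    # OpenAI-compatible /v1/chat/completions endpoint. The URL and API-key
    # handling are assumptions, not the documented interface.
    GATEWAY_URL = os.environ.get("BEDROCK_GATEWAY_URL", "http://localhost:8000")
    API_KEY = os.environ.get("BEDROCK_GATEWAY_API_KEY", "changeme")

    def ask_bedrock(prompt: str, model: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
        resp = requests.post(
            f"{GATEWAY_URL}/v1/chat/completions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": model, "messages": [{"role": "user", "content": prompt}]},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]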

Component Interactions

Understanding how these components work together is crucial for optimal deployment. Let’s explore the interaction patterns across different deployment modes:

Local Mode: Privacy-First Architecture

graph LR
    User --> WebUI
    WebUI --> Ollama
    Ollama --> LocalModels

In Local mode, components interact within your infrastructure, ensuring complete data privacy while maintaining full functionality. This architecture is perfect for:

Hybrid Mode: The Best of Both Worlds

graph LR
    User --> WebUI
    WebUI --> Ollama
    WebUI --> BedrockGateway
    Ollama --> LocalModels
    BedrockGateway --> AWSBedrock

Hybrid mode represents our flexible approach to deployment, combining local processing power with cloud capabilities. This architecture excels in:

AWS Mode: Enterprise-Scale Architecture

graph LR
    User --> ALB
    ALB --> WebUI-ECS-Fargate
    WebUI-ECS-Fargate --> Ollama-ECS-EC2
    WebUI-ECS-Fargate --> BedrockGateway-ECS-Fargate
    BedrockGateway-ECS-Fargate --> AWSBedrock
    Ollama-ECS-EC2 --> InstalledModels

AWS mode delivers enterprise-grade scalability and reliability. This sophisticated architecture provides:

Understanding Data Flows: How Information Moves Through Horizons

The true power of Horizons lies not just in its components, but in how they work together to process and manage information. Let’s explore the key data flows that make everything work:

Chat Request Flow: From User to AI and Back

When a user interacts with Horizons, a sophisticated sequence of events occurs:

  1. The user’s message begins its journey through WebUI
  2. Our validation layer ensures the request meets all security and format requirements
  3. Smart routing directs the request to either Ollama or Bedrock based on the selected model
  4. The AI model processes the request and generates a response
  5. The interaction is securely stored in PostgreSQL for future reference

The platform's own overhead in this pipeline amounts to milliseconds; overall response time is dominated by model inference, so the experience stays responsive while security and reliability are maintained.
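
To make step 3 concrete, here is a routing sketch. The helper functions reuse the Ollama and gateway examples above, and the rule that Bedrock model IDs carry a vendor prefix is an assumed convention for illustration, not the actual WebUI routing logic.

    # Illustrative routing only (not the platform's real implementation):
    # send the request to Ollama or to the Bedrock Gateway depending on
    # which model the user selected.
    BEDROCK_PREFIXES = ("anthropic.", "amazon.", "meta.", "mistral.")  # assumed convention

    def route_chat(model: str, prompt: str) -> str:
        if model.startswith(BEDROCK_PREFIXES):
            return ask_bedrock(prompt, model)   # gateway sketch above
        return generate(prompt, model)          # Ollama sketch above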

Model Management: Keeping AI Updated and Optimized

Our model management flow ensures you always have the right AI models ready when needed:

  1. Administrators can easily select and manage models through the intuitive interface
  2. Ollama handles the secure download and installation of models
  3. Each model is automatically optimized for your specific hardware configuration
  4. Detailed model metadata is maintained for optimal performance and management
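
For example, model pulls and inventory checks can be driven through Ollama's REST API. The sketch below assumes the default local endpoint and is an illustration, not the management code that ships with the platform.

    import json
    import requests

    OLLAMA_URL = "http://localhost:11434"  # assumed default endpoint

    def pull_model(name: str) -> None:
        # /api/pull streams progress as newline-delimited JSON objects.
        with requests.post(f"{OLLAMA_URL}/api/pull", json={"name": name}, stream=True) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines():
                if line:
                    print(json.loads(line).get("status", ""))

    def list_models() -> list[str]:
        # /api/tags lists the models currently installed on this host.
        resp = requests.get(f"{OLLAMA_URL}/api/tags")
        resp.raise_for_status()
        return [m["name"] for m in resp.json()["models"]]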

Authentication: Keeping Your System Secure

Security is paramount in Horizons, and our authentication flow reflects this:

  1. Every user request goes through robust authentication
  2. Credentials are validated against your security policies
  3. Secure session tokens are generated for ongoing interactions
  4. All subsequent requests are validated using these tokens

This ensures that every interaction is secure while maintaining a smooth user experience.
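
The sketch below shows the general shape of steps 3 and 4 using signed session tokens (JWTs via PyJWT). It illustrates the pattern only; it is not Open WebUI's actual authentication code, and the secret handling and session length are assumptions.

    import datetime
    import os
    import jwt  # PyJWT

    SECRET = os.environ["SESSION_SECRET"]         # assumed to come from your secret store
    TOKEN_LIFETIME = datetime.timedelta(hours=8)  # assumed session length

    def issue_token(user_id: str) -> str:
        now = datetime.datetime.now(datetime.timezone.utc)
        claims = {"sub": user_id, "iat": now, "exp": now + TOKEN_LIFETIME}
        return jwt.encode(claims, SECRET, algorithm="HS256")

    def validate_token(token: str) -> str:
        # Raises jwt.InvalidTokenError (expired or tampered token) on failure.
        return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]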

Scaling for Growth: Adapting to Your Needs

As your usage grows, Horizons grows with you. Our scaling capabilities help the system maintain performance as load increases:

Local and Hybrid Scaling: Optimizing Local Resources

In Local and Hybrid modes, we focus on maximizing your infrastructure’s potential:

AWS Mode: Enterprise-Grade Scalability

AWS mode unleashes the full power of cloud scaling:
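
As one illustration of what this can look like, the sketch below registers an ECS service with Application Auto Scaling using boto3. The cluster and service names are placeholders, and the shipped Horizons infrastructure code may configure scaling differently (for example through infrastructure-as-code templates).

    import boto3

    # Illustrative only: scale a WebUI ECS service between 2 and 10 tasks
    # based on average CPU. Resource names are placeholders.
    autoscaling = boto3.client("application-autoscaling")
    resource_id = "service/horizons-cluster/horizons-webui"  # placeholder

    autoscaling.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=10,
    )
    autoscaling.put_scaling_policy(
        PolicyName="webui-cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )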

Keeping Everything Healthy: Monitoring and Maintenance

Maintaining a healthy system requires vigilant monitoring. Horizons provides comprehensive health monitoring capabilities:

Health Checks

Each component provides detailed health information:
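
A simple way to poll the components is sketched below. Ollama's root endpoint is its documented liveness response; the WebUI and gateway paths shown here are assumptions and may differ in your deployment.

    import requests

    # Assumed local endpoints; adjust hosts, ports and paths to your deployment.
    CHECKS = {
        "ollama": "http://localhost:11434/",                 # returns "Ollama is running"
        "webui": "http://localhost:3000/health",             # assumed health path
        "bedrock-gateway": "http://localhost:8000/health",   # assumed health path
    }

    def check_all() -> dict[str, bool]:
        status = {}
        for name, url in CHECKS.items():
            try:
                status[name] = requests.get(url, timeout=5).ok
            except requests.RequestException:
                status[name] = False
        return status

    for name, healthy in check_all().items():
        print(f"{name}: {'healthy' if healthy else 'unreachable'}")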

Performance Metrics: Understanding The System

We track crucial metrics to ensure optimal performance:
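
As one example of a useful signal, Ollama's non-streaming responses include eval_count and eval_duration fields that translate directly into generation throughput. This sketch only shows how to read them; how Horizons itself aggregates metrics is not covered here.

    import requests

    # Compute tokens-per-second from a single non-streaming Ollama response.
    # eval_count is tokens generated; eval_duration is in nanoseconds.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": "Hello", "stream": False},
        timeout=120,
    ).json()

    tokens_per_second = resp["eval_count"] / (resp["eval_duration"] / 1e9)
    print(f"throughput: {tokens_per_second:.1f} tokens/s")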

Securing Your Deployment: A Multi-Layered Approach

Security isn’t just a feature in Horizons - it’s a fundamental aspect of every component:

Component-Level Security

We implement multiple security layers:

Your Next Steps

Ready to dive deeper? Here’s where to go next:


Horizons OmniChat by evereven