Horizons: The OmniChat

A flexible and powerful chatbot platform that brings enterprise-grade LLM capabilities to your infrastructure.

Frequently Asked Questions (FAQ)

General Questions

What is Horizons OmniChat?

Horizons OmniChat is an open-source chatbot platform that brings enterprise-grade LLM capabilities to your infrastructure, with flexible deployment options including local, hybrid, and AWS modes.

What makes Horizons different from other chatbot platforms?

Which deployment mode should I choose?

Mode     Best For                        Requirements            Key Benefits
Local    Development, testing, privacy   8 GB RAM, Docker        Complete control
Hybrid   Production, cost-effective      AWS account, 8 GB RAM   Flexibility
AWS      Enterprise, scalability         AWS account             Full cloud benefits

Technical Questions

What are the system requirements?

See our detailed System Requirements guide, but in general:

    • Local/Hybrid: at least 8 GB RAM, plus Docker and Docker Compose
    • Hybrid/AWS: an AWS account
    • Optional: an NVIDIA GPU for faster local inference

Which models are supported?

Local Models (via Ollama)

Cloud Models (via AWS Bedrock)
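The supported model lists live in the linked guides. As a hedged sketch of adding a local model, assuming the Ollama container is named `ollama` (as in the troubleshooting commands later in this FAQ) and using `llama3` purely as an illustrative tag:

```shell
# Pull an example model into the running Ollama container
# ("llama3" is only an illustrative tag -- substitute any model you need)
docker exec ollama ollama pull llama3

# Confirm the model is now available
docker exec ollama ollama list
```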

How do I update to the latest version?

# Local/Hybrid Mode
git pull origin main
make local-down   # or hybrid-down
docker compose pull
make local-up     # or hybrid-up

# AWS Mode
git pull origin main
make aws-plan
make aws-apply

Deployment Questions

Can I deploy Horizons in my own datacenter?

Yes! The Local and Hybrid modes are specifically designed for on-premises deployment.

How do I scale Horizons?

Local/Hybrid Mode

AWS Mode
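The scaling specifics are in the deployment docs. As a rough sketch: in Local/Hybrid mode, stateless services can be given extra replicas with Docker Compose (the `open-webui` service name below is an assumption, not confirmed by this FAQ), while AWS mode scales through the Terraform-managed infrastructure (`make aws-plan` / `make aws-apply`, as in the update steps above).

```shell
# Local/Hybrid: run extra replicas of a stateless service
# ("open-webui" is an assumed service name -- check your compose file)
docker compose up -d --scale open-webui=3
```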

How do I back up my data?

See our detailed Backup Guide, but in general:

# Local/Hybrid Mode
docker exec open-webui-db pg_dump -U $POSTGRES_USER $POSTGRES_DB > backup.sql

# AWS Mode
aws rds create-db-snapshot \
  --db-instance-identifier horizons-persistence-db \
  --db-snapshot-identifier horizons-backup-$(date +%Y%m%d)
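A matching restore for the Local/Hybrid dump, using the same container name and environment variables as the `pg_dump` command above:

```shell
# Restore the SQL dump into the same Postgres container
docker exec -i open-webui-db psql -U $POSTGRES_USER -d $POSTGRES_DB < backup.sql
```

For AWS mode, a snapshot can be restored into a new instance with `aws rds restore-db-instance-from-db-snapshot` (the target identifier here is an example):

```shell
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier horizons-persistence-db-restored \
  --db-snapshot-identifier <snapshot-id>
```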

Security Questions

How is data protected?

Does Horizons comply with GDPR/HIPAA/SOC2?

How are updates and security patches handled?

Enterprise Questions

What enterprise features are available?

How do I get enterprise support?

Contact our Enterprise Support Team for:

Can I customize Horizons for my organization?

Yes! Options include:

Troubleshooting

Common Issues

Model Loading Issues

# Check Ollama status
docker logs ollama
docker exec ollama ollama list

Performance Issues

# Check resource usage
docker stats
nvidia-smi  # if using GPU

Connection Issues

# Verify services
docker compose ps
curl http://localhost:3002/health

See our Troubleshooting Guide for more details.

Getting Help

Where can I find documentation?

How do I report issues?

  1. Check existing GitHub Issues
  2. Create a new issue with:
    • Deployment mode
    • Error messages
    • Steps to reproduce
    • System information
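The system information requested above can be gathered with a few standard commands, for example:

```shell
# Collect basic system information to attach to the issue
uname -a                 # OS and kernel
docker --version         # Docker engine version
docker compose version   # Compose plugin version
docker compose ps        # State of the Horizons services
free -h                  # Available memory (Linux only)
```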

How do I contribute?

See our Contributing Guide for:

Where can I get community support?


Horizons OmniChat by evereven